15 results for Power factor correction

in QUB Research Portal - Research Directory and Institutional Repository for Queen's University Belfast


Relevance: 90.00%

Abstract:

This paper examines the ability of the doubly fed induction generator (DFIG) to deliver multiple reactive power objectives during variable wind conditions. The reactive power requirement is decomposed based on various control objectives (e.g. power factor control, voltage control, loss minimisation, and flicker mitigation) defined over different time frames (i.e. seconds, minutes, and hours), and the control reference is generated by aggregating the individual reactive power requirements of each control strategy. A novel coordinated controller is implemented for the rotor-side converter and the grid-side converter, considering their capability curves and illustrating that it can effectively utilise the aggregated DFIG reactive power capability for system performance enhancement. The performance of the multi-objective strategy is examined for a range of wind and network conditions, and it is shown that for the majority of the scenarios, more than 92% of the main control objective can be achieved while introducing the integrated flicker control scheme alongside the main reactive power control scheme. Therefore, optimal control coordination across the different control strategies can maximise the availability of ancillary services from DFIG-based wind farms without additional dynamic reactive power devices being installed in power networks.
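
As a rough illustration of the aggregation step described in this abstract, the sketch below combines per-objective reactive power requests into a single converter reference and clips it to an assumed capability limit. The objective names, weights, and limits are hypothetical and not taken from the paper.

```python
# Minimal sketch: aggregate per-objective reactive power requests into one
# DFIG reactive power reference, then clip to an assumed capability limit.
# All numbers and objective names are illustrative, not from the paper.

def aggregate_q_reference(requests, q_min, q_max):
    """requests: dict of objective name -> requested reactive power (p.u.)."""
    q_total = sum(requests.values())           # simple aggregation of objectives
    return max(q_min, min(q_max, q_total))     # respect converter capability curve

requests = {
    "power_factor_control": 0.10,   # slow (hourly) objective
    "voltage_control":      0.05,   # minutes time frame
    "flicker_mitigation":  -0.02,   # seconds time frame
}

q_ref = aggregate_q_reference(requests, q_min=-0.3, q_max=0.3)
print(f"Aggregated reactive power reference: {q_ref:.3f} p.u.")
```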

Relevance: 80.00%

Abstract:

A novel application-specific instruction set processor (ASIP) for use in the construction of modern signal processing systems is presented. This is a flexible device that can be used in the construction of array processor systems for the real-time implementation of functions such as singular-value decomposition (SVD) and QR decomposition (QRD), as well as other important matrix computations. It uses a coordinate rotation digital computer (CORDIC) module to perform arithmetic operations, and several approaches are adopted to achieve high performance, including pipelining of the micro-rotations, the use of parallel instructions and a dual-bus architecture. In addition, a novel method for scale factor correction is presented which only needs to be applied once at the end of the computation. This also reduces computation time and enhances performance. Methods are described which allow this processor to be used in reduced-dimension (i.e., folded) array processor structures that allow tradeoffs between hardware and performance. The net result is a flexible matrix computational processing element (PE) whose functionality can be changed under program control for use in a wider range of scenarios than previous work. Details are presented of the results of a design study, which considers the application of this decomposition PE architecture in a combined SVD/QRD system and demonstrates that a combination of high performance and efficient silicon implementation is achievable. © 2005 IEEE.
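
A minimal sketch of the deferred scale-factor correction idea mentioned above: each CORDIC micro-rotation scales the vector by a known factor, so a single compensation by the accumulated gain can be applied once at the end rather than per iteration. The iteration count and interface below are illustrative assumptions, not the paper's design.

```python
import math

def cordic_rotate(x, y, angle, n_iter=16):
    """Rotate (x, y) by `angle` radians using CORDIC micro-rotations.

    The per-iteration gain is not compensated inside the loop; the
    accumulated scale factor k is applied once at the end, mirroring the
    deferred scale-factor correction described in the abstract.
    """
    z = angle
    k = 1.0                                   # accumulated CORDIC gain
    for i in range(n_iter):
        d = 1.0 if z >= 0 else -1.0
        x, y = x - d * y * 2**-i, y + d * x * 2**-i
        z -= d * math.atan(2**-i)
        k *= math.sqrt(1 + 2**(-2 * i))
    return x / k, y / k                       # single correction at the end

print(cordic_rotate(1.0, 0.0, math.pi / 4))   # ~ (0.7071, 0.7071)
```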

Relevance: 80.00%

Abstract:

The paper presents a multiple-input single-output fuzzy logic governor algorithm that can be used to improve the transient response of a diesel generating set when supplying an islanded load. The proposed governor uses the traditional speed input in addition to voltage and power factor to modify the fuelling requirements during various load disturbances. The use of fuzzy logic control allows PID-type structures that can provide variable-gain strategies to account for non-linearities in the system. Fuzzy logic also provides a means of processing other input information by linguistic reasoning, producing a logical control output to aid the governor action during transient disturbances. The test results were obtained using a 50 kVA naturally aspirated diesel generator testing facility. Both real and reactive load tests were conducted. The complex load test results demonstrate that, by using additional inputs to the governor algorithm, an enhanced generator transient speed recovery response can be obtained.
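
A minimal sketch of a multiple-input fuzzy adjustment along the lines described above: speed error, voltage error, and power factor are each mapped through simple triangular membership functions and a small rule base to a fuelling correction. The membership breakpoints, rules, and output gains are purely illustrative assumptions, not the paper's controller.

```python
def tri(x, a, b, c):
    """Triangular membership function with peak at b and feet at a and c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuel_correction(speed_err, volt_err, power_factor):
    """Return a fuelling correction (arbitrary units) from three inputs."""
    # Fuzzify the inputs (breakpoints are illustrative, errors in per-unit).
    speed_low  = tri(speed_err, -0.2, -0.1, 0.0)
    speed_high = tri(speed_err,  0.0,  0.1, 0.2)
    volt_low   = tri(volt_err,  -0.2, -0.1, 0.0)
    pf_poor    = tri(power_factor, 0.5, 0.7, 0.9)

    # Tiny rule base: (firing strength, consequent fuelling change).
    rules = [
        (speed_low, +1.0),                    # speed dip  -> add fuel
        (speed_high, -1.0),                   # overspeed  -> cut fuel
        (min(volt_low, pf_poor), +0.5),       # heavy reactive load -> add fuel
    ]

    # Weighted-average (centroid-like) defuzzification.
    total_w = sum(w for w, _ in rules)
    return sum(w * out for w, out in rules) / total_w if total_w else 0.0

print(fuel_correction(speed_err=-0.08, volt_err=-0.12, power_factor=0.72))
```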

Relevance: 80.00%

Abstract:

Extrusion is one of the fundamental production methods in the polymer processing industry and is used in the production of a large number of commodities across a diverse range of industrial sectors. As it is an energy-intensive production method, process energy efficiency is one of the major concerns, and the selection of the most energy-efficient processing conditions is key to reducing operating costs. Usually, extruders consume energy through the drive motor, barrel heaters, cooling fans, cooling water pumps, gear pumps, etc. Typically, the drive motor is the largest energy-consuming device in an extruder, while barrel/die heaters are responsible for the second largest energy demand. This study focuses on investigating the total energy demand of an extrusion plant under various processing conditions while identifying ways to optimise energy efficiency. Initially, a review was carried out on the monitoring and modelling of energy consumption in polymer extrusion. The power factor, energy demand and losses of a typical extrusion plant are also discussed in detail. The mass throughput, total energy consumption and power factor of an extruder were experimentally observed over different processing conditions, and the total extruder energy demand was modelled both empirically and using commercially available extrusion simulation software. The experimental results show that extruder energy demand is heavily coupled between the machine, material and process parameters. The total power predicted by the simulation software exhibits a lagging offset compared with the experimental measurements. The empirical models are in good agreement with the experimental measurements and can therefore be used to study process energy behaviour in detail and to identify ways to optimise process energy efficiency.
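
As a rough illustration of the empirical modelling step described above, the sketch below fits total extruder power as a simple function of two assumed process variables (screw speed and barrel set temperature) by least squares. The variable names, model form, and data are illustrative assumptions rather than the paper's model or measurements.

```python
import numpy as np

# Illustrative measurements: screw speed (rpm), barrel set temperature (°C),
# and total measured power (kW). Values are made up for demonstration only.
speed = np.array([30, 50, 70, 90, 110, 60, 100], dtype=float)
temp  = np.array([180, 200, 190, 210, 220, 215, 185], dtype=float)
power = np.array([3.9, 5.5, 6.8, 8.4, 9.9, 6.4, 8.9])

# Fit an assumed linear empirical model: P = b0 + b1*speed + b2*temp.
X = np.column_stack([np.ones_like(speed), speed, temp])
coeffs, *_ = np.linalg.lstsq(X, power, rcond=None)

b0, b1, b2 = coeffs
print(f"P ≈ {b0:.2f} + {b1:.3f}*speed + {b2:.3f}*temp (kW)")
print("Predicted power at 80 rpm, 205 °C:",
      round(b0 + b1 * 80 + b2 * 205, 2), "kW")
```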

Relevance: 80.00%

Abstract:

An SVD processor system is presented in which each processing element is implemented using a simple CORDIC unit. The internal recursive loop within the CORDIC module is exploited, with pipelining being used to multiplex the two independent micro-rotations onto a single CORDIC processor. This leads to a high-performance and efficient hardware architecture. In addition, a novel method for scale factor correction is presented which need only be applied once at the end of the computation. This also reduces the computation time. The net result is an SVD architecture based on a conventional CORDIC approach, which combines high performance with high silicon area efficiency.

Relevance: 30.00%

Abstract:

The power-handling capabilities of helical resonator filters for space applications are discussed. Emerging difficulties due to multipaction effects are highlighted. A method is proposed to increase the specified power handling without significantly sacrificing size or quality factor. Experimental verification is provided by means of a fabricated prototype, for which the measured filter response and multipaction test results are presented.

Relevance: 30.00%

Abstract:

Type 1 diabetes (T1D) increases the risk of developing microvascular complications and cardiovascular disease (CVD). Dyslipidemia is a common risk factor in the pathogenesis of both CVD and diabetic nephropathy (DN), with CVD identified as the primary cause of death in patients with DN. In light of this commonality, we assessed single nucleotide polymorphisms (SNPs) in thirty-seven key genetic loci previously associated with dyslipidemia in a T1D cohort using a case-control design. SNPs (n = 53) were genotyped using Sequenom in 1467 individuals with T1D (718 cases with proteinuric nephropathy and 749 controls without nephropathy, i.e. normal albumin excretion). Cases and controls were white and recruited from the UK and Ireland. Association analyses were performed using PLINK to compare allele frequencies in cases and controls. In a sensitivity analysis, samples from control individuals with reduced renal function (estimated glomerular filtration rate < 60 ml/min/1.73 m²) were excluded. Correction for multiple testing was performed by permutation testing. A total of 1394 samples passed quality control filters. Following regression analysis adjusted by collection center, gender, duration of diabetes, and average HbA1c, two SNPs were significantly associated with DN: rs4420638 in the APOC1 region (odds ratio [OR] = 1.51; confidence interval [CI]: 1.19–1.91; P = 0.001) and rs1532624 in CETP (OR = 0.82; CI: 0.69–0.99; P = 0.034); rs4420638 was also significantly associated in a sensitivity analysis (P = 0.016), together with rs7679 (P = 0.027). However, no association remained significant following correction for multiple testing. Subgroup analysis of end-stage renal disease status failed to reveal any association. Our results suggest that common variants associated with dyslipidemia are not strongly associated with DN in T1D among white individuals. Our findings cannot entirely exclude these key genes, which are central to the process of dyslipidemia, from involvement in DN pathogenesis, as our study had limited power to detect variants of small effect size. Analysis in larger independent cohorts is required.
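
As a rough illustration of the allele-frequency comparison underlying such a case-control association analysis, the sketch below computes an odds ratio with a Woolf 95% confidence interval from a 2×2 table of allele counts; the counts are invented for demonstration and do not correspond to the study data.

```python
import math

def allele_odds_ratio(case_alt, case_ref, ctrl_alt, ctrl_ref):
    """Odds ratio and Woolf 95% CI from allele counts in cases and controls."""
    or_ = (case_alt * ctrl_ref) / (case_ref * ctrl_alt)
    se  = math.sqrt(1 / case_alt + 1 / case_ref + 1 / ctrl_alt + 1 / ctrl_ref)
    lo  = math.exp(math.log(or_) - 1.96 * se)
    hi  = math.exp(math.log(or_) + 1.96 * se)
    return or_, lo, hi

# Invented minor/major allele counts in cases and controls, for illustration.
or_, lo, hi = allele_odds_ratio(case_alt=260, case_ref=1176,
                                ctrl_alt=200, ctrl_ref=1298)
print(f"OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```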

Relevance: 30.00%

Abstract:

This paper investigates the uplink achievable rates of massive multiple-input multiple-output (MIMO) antenna systems in Ricean fading channels, using maximal-ratio combining (MRC) and zero-forcing (ZF) receivers, assuming perfect and imperfect channel state information (CSI). In contrast to previous relevant works, the fast fading MIMO channel matrix is assumed to have an arbitrary-rank deterministic component as well as a Rayleigh-distributed random component. We derive tractable expressions for the achievable uplink rate in the large-antenna limit, along with approximating results that hold for any finite number of antennas. Based on these analytical results, we obtain the scaling law that the users' transmit power should satisfy, while maintaining a desirable quality of service. In particular, it is found that regardless of the Ricean K-factor, in the case of perfect CSI, the approximations converge to the same constant value as the exact results, as the number of base station antennas, M, grows large, while the transmit power of each user can be scaled down proportionally to 1/M. If CSI is estimated with uncertainty, the same result holds true but only when the Ricean K-factor is non-zero. Otherwise, if the channel experiences Rayleigh fading, we can only cut the transmit power of each user proportionally to 1/√M. In addition, we show that with an increasing Ricean K-factor, the uplink rates will converge to fixed values for both MRC and ZF receivers.
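
The power-scaling result can be restated in symbols as below; the notation (M antennas, per-user transmit power p_u, Ricean K-factor K) is an assumption for illustration rather than the paper's own equations.

```latex
% Assumed notation (not reproduced from the paper): M = number of BS antennas,
% p_u = per-user transmit power, K = Ricean K-factor.
\[
  p_u \propto \frac{1}{M}
  \quad \text{(perfect CSI, any } K\text{; or imperfect CSI with } K > 0\text{)},
  \qquad
  p_u \propto \frac{1}{\sqrt{M}}
  \quad \text{(imperfect CSI, } K = 0\text{, i.e. Rayleigh fading)}.
\]
```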

Relevance: 30.00%

Abstract:

Power dissipation and robustness to process variation have conflicting design requirements. Scaling of voltage is associated with larger variations, while Vdd upscaling or transistor upsizing for parametric-delay variation tolerance can be detrimental for power dissipation. However, for a class of signal-processing systems, effective tradeoff can be achieved between Vdd scaling, variation tolerance, and output quality. In this paper, we develop a novel low-power variation-tolerant algorithm/architecture for color interpolation that allows a graceful degradation in the peak-signal-to-noise ratio (PSNR) under aggressive voltage scaling as well as extreme process variations. This feature is achieved by exploiting the fact that all computations used in interpolating the pixel values do not equally contribute to PSNR improvement. In the presence of Vdd scaling and process variations, the architecture ensures that only the less important computations are affected by delay failures. We also propose a different sliding-window size than the conventional one to improve interpolation performance by a factor of two with negligible overhead. Simulation results show that, even at a scaled voltage of 77% of nominal value, our design provides reasonable image PSNR with 40% power savings. © 2006 IEEE.
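
A minimal sketch of the significance idea described above: a green-channel estimate at a missing pixel is split into a neighbour average (treated as the significant computation) and a small correction term (standing in for the less important computations that may be dropped or corrupted under aggressive voltage scaling). The split and window layout are illustrative assumptions, not the paper's interpolation algorithm.

```python
def interpolate_green(north, south, east, west, correction_ok=True,
                      laplacian=0.0):
    """Estimate a missing green pixel from its four green neighbours.

    The neighbour average is the 'significant' computation; the optional
    correction term represents the less important computations that may be
    sacrificed under voltage scaling with only a small PSNR penalty.
    """
    estimate = (north + south + east + west) / 4.0        # significant path
    if correction_ok:
        estimate += laplacian / 8.0                       # less important path
    return min(255, max(0, int(round(estimate))))

print(interpolate_green(120, 124, 118, 126, laplacian=16))   # both paths
print(interpolate_green(120, 124, 118, 126,                  # degraded mode
                        correction_ok=False, laplacian=16))
```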

Relevance: 30.00%

Abstract:

Biomarkers for Alzheimer's disease (AD) should meet several criteria, including simplicity of testing. Inappropriate activation of the complement cascade has been implicated in the pathogenesis of AD. Complement factor H (CFH) is a regulator of the cascade, but studies on plasma CFH levels in AD have provided mixed results. This study compared plasma CFH levels in 317 AD cases with 254 controls using an immunodiffusion assay. The sample had an 80% power to detect a difference of 23 mg/L between cases and controls, but no difference was evident. Plasma CFH may not be a suitable biomarker for AD.

Relevance: 30.00%

Abstract:

BACKGROUND: Neisseria meningitidis can cause severe infection in humans. Polymorphism of Complement Factor H (CFH) is associated with altered risk of invasive meningococcal disease (IMD). We aimed to find whether polymorphism of other complement genes altered risk and whether variation of N. meningitidis factor H binding protein (fHBP) affected the risk association.

METHODS: We undertook a case-control study with 309 European cases and 5,200 controls from the 1958 Birth Cohort and National Blood Service cohorts. We used additive-model logistic regression, accepting P < 0.05 as significant after correction for multiple testing. The effects of fHBP subfamily on the age at infection and the severity of disease were tested using the independent-samples median test and Student's t-test. The effect of CFH polymorphism on the N. meningitidis fHBP subfamily was investigated by logistic regression and the chi-squared test.

RESULTS: The rs12085435 A allele in C8B was associated with reduced odds of IMD (odds ratio [OR] 0.35 [95% CI 0.19-0.67]; P = 0.03 after correction). A CFH haplotype tagged by rs3753396 G was associated with IMD (OR 0.56 [95% CI 0.42-0.76]; P = 1.6×10⁻⁴). There was no difference in bacterial load (CtrA cycle threshold) associated with carriage of this haplotype. Host CFH haplotype and meningococcal fHBP subfamily were not associated. Individuals infected with meningococci expressing subfamily A fHBP were younger than those infected with subfamily B fHBP meningococci (median 1 vs 2 years; P = 0.025).

DISCUSSION: The protective CFH haplotype alters odds of IMD without affecting bacterial load for affected heterozygotes. CFH haplotype did not affect the likelihood of infecting meningococci having either fHBP subfamily. The association between C8B rs12085435 and IMD requires independent replication. The CFH association is of interest because it is independent of known functional polymorphisms in CFH. As fHBP-containing vaccines are now in use, relationships between CFH polymorphism and vaccine effectiveness and side-effects may become important.

Relevance: 30.00%

Abstract:

OBJECTIVE: To compare outcomes between adjustable spectacles and conventional methods for refraction in young people. DESIGN: Cross sectional study. SETTING: Rural southern China. PARTICIPANTS: 648 young people aged 12-18 (mean 14.9 (SD 0.98)), with uncorrected visual acuity ≤ 6/12 in either eye. INTERVENTIONS: All participants underwent self refraction without cycloplegia (paralysis of near focusing ability with topical eye drops), automated refraction without cycloplegia, and subjective refraction by an ophthalmologist with cycloplegia. MAIN OUTCOME MEASURES: Uncorrected and corrected vision, improvement of vision (lines on a chart), and refractive error. RESULTS: Among the participants, 59% (384) were girls, 44% (288) wore spectacles, and 61% (393/648) had 2.00 dioptres or more of myopia in the right eye. All completed self refraction. The proportion with visual acuity ≥ 6/7.5 in the better eye was 5.2% (95% confidence interval 3.6% to 6.9%) for uncorrected vision, 30.2% (25.7% to 34.8%) for currently worn spectacles, 96.9% (95.5% to 98.3%) for self refraction, 98.4% (97.4% to 99.5%) for automated refraction, and 99.1% (98.3% to 99.9%) for subjective refraction (P = 0.033 for self refraction v automated refraction, P = 0.001 for self refraction v subjective refraction). Improvements over uncorrected vision in the better eye with self refraction and subjective refraction were within one line on the eye chart in 98% of participants. In logistic regression models, failure to achieve maximum recorded visual acuity of 6/7.5 in right eyes with self refraction was associated with greater absolute value of myopia/hyperopia (P<0.001), greater astigmatism (P = 0.001), and not having previously worn spectacles (P = 0.002), but not age or sex. Significant inaccuracies in power (≥ 1.00 dioptre) were less common in right eyes with self refraction than with automated refraction (5% v 11%, P<0.001). CONCLUSIONS: Though visual acuity was slightly worse with self refraction than automated or subjective refraction, acuity was excellent in nearly all these young people with inadequately corrected refractive error at baseline. Inaccurate power was less common with self refraction than automated refraction. Self refraction could decrease the requirement for scarce trained personnel, expensive devices, and cycloplegia in children's vision programmes in rural China.

Relevance: 30.00%

Abstract:

This study introduces an inexact, but ultra-low-power, computing architecture devoted to the embedded analysis of bio-signals. The platform operates at extremely low voltage supply levels to minimise energy consumption. In this scenario, the reliability of static RAM (SRAM) memories cannot be guaranteed when using conventional 6-transistor implementations. While error correction codes and dedicated SRAM implementations can ensure correct operation in this near-threshold regime, they incur significant area and energy overheads, and should therefore be employed judiciously. Herein, the authors propose a novel scheme to design inexact computing architectures that selectively protects memory regions based on their significance, i.e. their impact on the end-to-end quality of service, as dictated by the characteristics of the bio-signal application. The authors illustrate their scheme on an industrial benchmark application performing the power spectrum analysis of electrocardiograms. Experimental evidence showcases that a significance-based memory protection approach leads to a small degradation in output quality with respect to an exact implementation, while resulting in substantial energy gains, both in the memory and in the processing subsystem.
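
A minimal sketch of the allocation policy described above: application buffers are ranked by an assumed significance score (their estimated impact on output quality) and only the most significant ones are placed in a limited, error-protected SRAM region. Buffer names, sizes, and scores are illustrative assumptions.

```python
def assign_buffers(buffers, protected_capacity):
    """Place the most significant buffers in the protected SRAM region.

    buffers: list of (name, size_bytes, significance) tuples.
    Returns (protected, unprotected) name lists.
    """
    protected, unprotected, used = [], [], 0
    for name, size, _ in sorted(buffers, key=lambda b: b[2], reverse=True):
        if used + size <= protected_capacity:
            protected.append(name)
            used += size
        else:
            unprotected.append(name)
    return protected, unprotected

# Illustrative ECG power-spectrum buffers: (name, bytes, significance score).
buffers = [
    ("fft_twiddle_factors", 2048, 0.9),
    ("filter_coefficients",  512, 0.8),
    ("raw_sample_window",   4096, 0.5),
    ("scratch_spectrum",    4096, 0.2),
]
print(assign_buffers(buffers, protected_capacity=4096))
```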

Relevance: 30.00%

Abstract:

Wearable devices performing advanced bio-signal analysis algorithms are expected to foster a revolution in healthcare provision for chronic cardiac diseases. In this context, energy efficiency is of paramount importance, as long-term monitoring must be ensured while relying on a tiny power source. Operating at a scaled supply voltage, just above the threshold voltage, effectively helps in saving substantial energy, but it makes circuits, and especially memories, more prone to errors, threatening the correct execution of algorithms. The use of error detection and correction codes may help to protect the entire memory content; however, it incurs large area and energy overheads which may not be compatible with the tight energy budgets of wearable systems. To cope with this challenge, in this paper we propose to limit the overhead of traditional schemes by selectively detecting and correcting errors only in the data that most strongly affect the end-to-end quality of service of ultra-low-power wearable electrocardiogram (ECG) devices. This partitioning protects either the significant words or the significant bits of each data element, according to the application characteristics (the statistical properties of the data in the application buffers) and their impact in determining the output. Applied to real ECG signals, the proposed heterogeneous error protection scheme allows substantial energy savings (11% in wearable devices) compared with state-of-the-art approaches, such as ECC, in which the whole memory is protected against errors. At the same time, it results in negligible degradation of output quality in the evaluated power spectrum analysis application for ECG signals.
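
A minimal sketch of the significant-bit protection idea: only the upper bits of each 16-bit ECG sample are covered by a parity check, while errors in the low-order bits are tolerated as small amplitude noise. The bit split and the use of simple parity (rather than any particular ECC) are illustrative assumptions, not the paper's scheme.

```python
def protect_sample(sample, sig_bits=8):
    """Return (sample, parity), where parity covers only the top `sig_bits`."""
    msb = (sample >> (16 - sig_bits)) & ((1 << sig_bits) - 1)
    parity = bin(msb).count("1") & 1
    return sample, parity

def check_sample(sample, parity, sig_bits=8):
    """True if the significant bits are consistent with the stored parity."""
    _, recomputed = protect_sample(sample, sig_bits)
    return recomputed == parity

sample, parity = protect_sample(0x1F3A)
print(check_sample(0x1F3A, parity))   # True: sample unchanged
print(check_sample(0x1F3B, parity))   # True: low-order bit flip is tolerated
print(check_sample(0x9F3A, parity))   # False: a significant bit was corrupted
```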