925 results for Power method
Abstract:
Reactive power is critical to the operation of power networks, in terms of both safety and economics. An unreasonable distribution of reactive power severely degrades the power quality of the network and increases transmission losses. Currently, the most economical and practical approach to minimizing real power loss remains reactive power dispatch. The reactive power dispatch problem is nonlinear and has both equality and inequality constraints. In this thesis, the PSO algorithm and the MATPOWER 5.1 toolbox are applied to solve the reactive power dispatch problem. PSO is a global optimization technique with excellent searching capability; its biggest advantage is that its efficiency is relatively insensitive to the complexity of the objective function. MATPOWER 5.1 is an open-source MATLAB toolbox focused on solving power flow problems; its benefit is that its code can be easily used and modified. The proposed method minimizes the real power loss in a practical power system and determines the optimal placement of a newly installed distributed generator (DG). The IEEE 14-bus system is used to evaluate the performance, and test results show the effectiveness of the proposed method.
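As an illustration of the optimizer named above, a minimal particle swarm sketch in Python (the thesis itself pairs PSO with MATPOWER's MATLAB power-flow routines; the objective function below is a hypothetical stand-in, not a power-flow loss evaluation):

import numpy as np

def pso(loss, bounds, n_particles=30, n_iter=100, w=0.7, c1=1.5, c2=1.5):
    # Minimal PSO: inertia w plus cognitive (c1) and social (c2) pulls.
    lo, hi = bounds
    dim = len(lo)
    rng = np.random.default_rng(0)
    x = rng.uniform(lo, hi, (n_particles, dim))        # positions
    v = np.zeros((n_particles, dim))                   # velocities
    pbest = x.copy()
    pbest_val = np.array([loss(p) for p in x])
    g = pbest[pbest_val.argmin()].copy()               # global best
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)                     # enforce variable bounds
        val = np.array([loss(p) for p in x])
        better = val < pbest_val
        pbest[better], pbest_val[better] = x[better], val[better]
        g = pbest[pbest_val.argmin()].copy()
    return g, pbest_val.min()

# Toy quadratic standing in for a real power-loss objective.
best, best_val = pso(lambda p: np.sum((p - 0.5) ** 2),
                     (np.zeros(3), np.ones(3)))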
Abstract:
PCDD/F emissions from three light-duty diesel vehicles (two vans and a passenger car) have been measured under on-road conditions. We propose a new methodology for small vehicles: a sample of exhaust gas is collected with equipment based on United States Environmental Protection Agency (U.S. EPA) method 23A for stationary stack emissions. The concentrations of O2, CO, CO2, NO, NO2 and SO2 have also been measured. Six tests were carried out at 90-100 km/h on a 100 km route. Two additional tests were performed during the first 10 minutes and the following 60 minutes of the run to assess the effect of engine temperature on PCDD/F emissions. The emission factors obtained for the vans ranged from 1800 to 8400 pg I-TEQ/Nm3 for a 2004 model year van and from 490 to 580 pg I-TEQ/Nm3 for a 2006 model year van. For the passenger car, one run was done with a catalyst and another without, giving emission factors (330-880 pg I-TEQ/Nm3) comparable to those of the newer van. Two further tests were carried out on a power generator, yielding emission factors ranging from 31 to 78 pg I-TEQ/Nm3. All results are discussed and compared with the literature.
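For readers unfamiliar with the I-TEQ unit used for these emission factors, the underlying arithmetic is a weighted sum of congener concentrations; a minimal Python sketch with hypothetical congener values and standard international toxic equivalency factors (I-TEFs):

# Each congener's measured concentration (pg/Nm3) is weighted by its
# I-TEF; the sum is the toxic-equivalent concentration in pg I-TEQ/Nm3.
# The measured values below are hypothetical, chosen only to show the math.
i_tef = {
    "2,3,7,8-TCDD": 1.0,       # reference congener
    "1,2,3,7,8-PeCDD": 0.5,
    "2,3,7,8-TCDF": 0.1,
}
measured_pg_per_nm3 = {        # hypothetical measurements
    "2,3,7,8-TCDD": 150.0,
    "1,2,3,7,8-PeCDD": 400.0,
    "2,3,7,8-TCDF": 1700.0,
}
i_teq = sum(measured_pg_per_nm3[c] * i_tef[c] for c in i_tef)
print(f"{i_teq:.0f} pg I-TEQ/Nm3")   # 150 + 200 + 170 = 520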
Abstract:
Power line interference is one of the main problems in surface electromyogram (EMG) signal analysis. In this work, a new method based on the stationary wavelet packet transform is proposed to estimate and remove this kind of noise from EMG data records. The performance has been quantitatively evaluated with synthetic noisy signals, obtaining good results independently of the signal-to-noise ratio (SNR). For the analyzed cases, the correlation coefficient is around 0.99, the energy relative to the pure EMG signal is 98-104%, the SNR is between 16.64 and 20.40 dB, and the mean absolute error (MAE) is in the range of -69.02 to -65.31 dB. The method has also been applied to 18 real EMG signals, evaluating the percentage of energy relative to the noisy signals. The proposed method adjusts the reduction level to the amplitude of each harmonic present in the analyzed noisy signals (synthetic and real), reducing the harmonics without altering the desired signal.
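As a rough illustration of amplitude-adaptive interference reduction in an undecimated wavelet domain, a Python sketch using PyWavelets: it applies the plain stationary wavelet transform rather than the paper's stationary wavelet packet transform, fits only the 50 Hz fundamental (a real record would also need its harmonics), and the wavelet, level, and reduction rule are all assumptions of the sketch:

import numpy as np
import pywt  # PyWavelets

fs = 1000.0
n = 4096                          # length must be a multiple of 2**level
t = np.arange(n) / fs
rng = np.random.default_rng(1)
emg = rng.standard_normal(n)      # stand-in for a real EMG record
noisy = emg + 0.8 * np.sin(2 * np.pi * 50 * t + 0.3)   # 50 Hz hum

def remove_50hz(x):
    # Least-squares fit of a 50 Hz sinusoid; return the residual, so the
    # reduction is proportional to the hum amplitude in this band.
    basis = np.column_stack([np.sin(2 * np.pi * 50 * t),
                             np.cos(2 * np.pi * 50 * t)])
    coef, *_ = np.linalg.lstsq(basis, x, rcond=None)
    return x - basis @ coef

level = 4
bands = pywt.swt(noisy, "db4", level=level)   # [(cA, cD), ...], coarsest first
# iswt rebuilds from the coarsest approximation plus all detail bands,
# so only those need processing.
cleaned = [(remove_50hz(cA) if i == 0 else cA, remove_50hz(cD))
           for i, (cA, cD) in enumerate(bands)]
clean = pywt.iswt(cleaned, "db4")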
Abstract:
Purpose. To clinically validate a new method for estimating the corneal power (Pc) using a variable keratometric index (nkadj) in eyes with previous laser refractive surgery. Setting. University of Alicante and Medimar International Hospital (Oftalmar), Alicante, Spain. Design. Retrospective case series. Methods. This retrospective study comprised 62 eyes of 62 patients that had undergone myopic LASIK surgery. An algorithm for the calculation of nkadj was used for the estimation of the adjusted keratometric corneal power (Pkadj). This value was compared with the classical keratometric corneal power (Pk), the True Net Power (TNP), and the Gaussian corneal power (PcGauss). Likewise, Pkadj was compared with other previously described methods. Results. Differences between PcGauss and Pc values obtained with all methods evaluated were statistically significant (p < 0.01). Differences between Pkadj and PcGauss were at the limit of clinical significance (p < 0.01, LoA [-0.33, 0.60] D). Differences between Pkadj and TNP were neither statistically nor clinically significant (p = 0.319, LoA [-0.50, 0.44] D). Differences between Pkadj and previously described methods were statistically significant (p < 0.01), except with PcHaigisL (p = 0.09, LoA [-0.37, 0.29] D). Conclusion. The use of the adjusted keratometric index (nkadj) is a valid method to estimate the central corneal power in corneas with previous myopic laser refractive surgery, providing results comparable to PcHaigisL.
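The classical keratometric reading behind Pk is a single-index formula, Pk = (nk - 1)/r, conventionally with nk = 1.3375. A worked Python sketch contrasting the conventional index with a hypothetical adjusted one (the paper's algorithm for computing nkadj is not reproduced here):

# Radius and adjusted index below are hypothetical illustration values.
r = 0.0082                 # anterior corneal radius: 8.2 mm (post-LASIK)
n_classical = 1.3375       # conventional keratometric index
n_adj = 1.3315             # hypothetical adjusted index

p_k = (n_classical - 1) / r      # classical keratometric power (dioptres)
p_kadj = (n_adj - 1) / r         # adjusted keratometric power (dioptres)
print(f"Pk = {p_k:.2f} D, Pkadj = {p_kadj:.2f} D")
# Pk ~ 41.16 D vs Pkadj ~ 40.43 D: a difference of roughly 0.7 D, the
# order of magnitude that biases IOL calculation after myopic LASIK.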
Abstract:
Many multifactorial biologic effects, particularly in the context of complex human diseases, are still poorly understood. At the same time, the systematic acquisition of multivariate data has become increasingly easy. The use of such data to analyze and model complex phenotypes, however, remains a challenge. Here, a new analytic approach is described, termed coreferentiality, together with an appropriate statistical test. Coreferentiality is the indirect relation of two variables of functional interest with respect to whether they parallel each other in their respective relatedness to multivariate reference data, which can be informative for a complex effect or phenotype. It is shown that the power of coreferentiality testing is comparable to that of multiple regression analysis, sufficient even when reference data are informative only to a relatively small extent of 2.5%, and clearly exceeds the power of simple bivariate correlation testing. Thus, coreferentiality testing retains the increased power of multivariate analysis while addressing a more straightforwardly interpretable bivariate relatedness. Systematic application of this approach could substantially improve the analysis and modeling of complex phenotypes, particularly in the context of human studies, where addressing functional hypotheses by direct experimentation is often difficult.
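One plausible reading of the idea, sketched in Python for illustration only (the paper's exact statistic and test are not reproduced; the profile construction and the shuffle-based significance check are assumptions of this sketch):

import numpy as np

# x and y are "coreferential" here if their profiles of correlation with
# the columns of a multivariate reference matrix R parallel each other.
rng = np.random.default_rng(0)
n, m = 200, 40                       # 200 subjects, 40 reference variables
latent = rng.standard_normal(n)      # shared hidden factor
x = latent + rng.standard_normal(n)
y = latent + rng.standard_normal(n)
R = (np.outer(latent, rng.standard_normal(m)) * 0.16
     + rng.standard_normal((n, m)))  # only weakly informative reference data

def profile(v, R):
    # Correlation of v with each reference column.
    vz = (v - v.mean()) / v.std()
    Rz = (R - R.mean(0)) / R.std(0)
    return vz @ Rz / len(v)

px, py = profile(x, R), profile(y, R)
stat = np.corrcoef(px, py)[0, 1]
# Crude null: shuffle one profile to break the pairing of its entries.
null = [np.corrcoef(rng.permutation(px), py)[0, 1] for _ in range(2000)]
p = (1 + np.sum(np.abs(null) >= abs(stat))) / 2001
print(f"coreferentiality r = {stat:.2f}, permutation p ~ {p:.4f}")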
Abstract:
"November 1981."
Abstract:
"This work is chiefly derived from the writings of Leslie, Fletcher, and Simpson." -Pref.
Abstract:
Senior thesis written for Oceanography 445
Abstract:
Thesis (Master's)--University of Washington, 2016-06
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-06
Abstract:
Accurate estimates of body mass in fossil taxa are fundamental to paleobiological reconstruction. Predictive equations derived from correlation with craniodental and body mass data in extant taxa are the most commonly used, but they can be unreliable for species whose morphology departs widely from that of living relatives. Estimates based on proximal limb-bone circumference data are more accurate but are inapplicable where postcranial remains are unknown. In this study we assess the efficacy of predicting body mass in Australian fossil marsupials by using an alternative correlate, endocranial volume. Body mass estimates for a species with highly unusual craniodental anatomy, the Pleistocene marsupial lion (Thylacoleo carnifex), fall within the range determined on the basis of proximal limb-bone circumference data, whereas estimates based on dental data are highly dubious. For all marsupial taxa considered, allometric relationships have small confidence intervals, and percent prediction errors are comparable to those of the best predictors using craniodental data. Although application is limited in some respects, this method may provide a useful means of estimating body mass for species with atypical craniodental or postcranial morphologies and taxa unrepresented by postcranial remains. A trend toward increased encephalization may constrain the method's predictive power with respect to many, but not all, placental clades.
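A sketch of the allometric workflow described above, with hypothetical extant-taxon data: fit log body mass against log endocranial volume, then judge the equation by percent prediction error:

import numpy as np

# Hypothetical endocranial volumes (cm^3) and body masses (kg).
ecv = np.array([12.0, 25.0, 60.0, 110.0, 180.0])
mass = np.array([1.1, 3.0, 9.5, 21.0, 40.0])

slope, intercept = np.polyfit(np.log10(ecv), np.log10(mass), 1)
pred = 10 ** (intercept + slope * np.log10(ecv))
# Back-transforming from logs underestimates the mean; published
# predictive equations usually apply a correction factor, omitted here.
pe = 100 * (mass - pred) / pred            # percent prediction error
print(f"log10(mass) = {intercept:.2f} + {slope:.2f} * log10(ECV)")
print(f"mean |%PE| = {np.abs(pe).mean():.1f}%")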
Abstract:
This study has three main objectives. First, it develops a generalization of the commonly used EKS method for multilateral price comparisons. It is shown that the EKS system can be generalized so that weights can be attached to each of the link comparisons used in the EKS computations. These weights can account for differing levels of reliability of the underlying binary comparisons. Second, various reliability measures and corresponding weighting schemes are presented and their merits discussed. Third, these new methods are applied to an international data set of manufacturing prices from the ICOP project. Although the weighted EKS method is theoretically superior, its empirical impact appears to be generally small compared to the unweighted EKS, and the impact is larger when the method is applied at lower levels of aggregation. Finally, the importance of using sector-specific PPPs in assessing relative levels of manufacturing productivity is shown.
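The unweighted EKS step underlying the study can be sketched compactly in Python: each multilateral parity is the geometric mean, over all bridge countries, of the chained binary links. The 3x3 matrix of binary PPPs below is hypothetical, and the weighted generalization the study develops is only described in the comments:

import numpy as np

# F[j, k] is a binary (Fisher-type) PPP converting country k's prices to
# country j's; Fisher links are reciprocal (F[k, j] = 1/F[j, k]) but not
# transitive, which is what the EKS step fixes.
F = np.array([[1.000, 0.800, 1.250],
              [1.250, 1.000, 1.600],
              [0.800, 0.625, 1.000]])   # hypothetical binary PPPs

M = F.shape[0]
logF = np.log(F)
# EKS_jk = exp( (1/M) * sum_l (log F_jl + log F_lk) ); the weighted EKS
# replaces this plain mean with a reliability-weighted least-squares fit
# in log space.
eks = np.exp((logF.sum(axis=1, keepdims=True)
              + logF.sum(axis=0, keepdims=True)) / M)
print(np.round(eks, 3))   # transitive: eks[j,k] == eks[j,l] * eks[l,k]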
Abstract:
Genetic assignment methods use genotype likelihoods to draw inference about where individuals were or were not born, potentially allowing direct, real-time estimates of dispersal. We used simulated data sets to test the power and accuracy of Monte Carlo resampling methods in generating statistical thresholds for identifying F0 immigrants in populations with ongoing gene flow, and hence for providing direct, real-time estimates of migration rates. The identification of accurate critical values required that resampling methods preserve the linkage disequilibrium deriving from recent generations of immigrants and reflect the sampling variance present in the data set being analysed. A novel Monte Carlo resampling method taking these aspects into account was proposed and its efficiency evaluated. Power and error were relatively insensitive to the frequency assumed for missing alleles. Power to identify F0 immigrants was improved by using large sample sizes (up to about 50 individuals) and by sampling all populations from which migrants may have originated. A combination of plotting genotype likelihoods and calculating mean genotype likelihood ratios (DLR) appeared to be an effective way to predict whether F0 immigrants could be identified for a particular pair of populations using a given set of markers.
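The genotype-likelihood building block that such assignment methods rest on is simple to sketch in Python (the paper's resampling scheme itself is not reproduced here; all allele frequencies and genotypes below are hypothetical):

import numpy as np

# Under Hardy-Weinberg, a multilocus genotype's likelihood in a population
# is the product over loci of p^2 (homozygote) or 2pq (heterozygote),
# using that population's allele frequencies.
freqs = {                                   # population A, hypothetical
    "locus1": {"A": 0.7, "a": 0.3},
    "locus2": {"B": 0.1, "b": 0.9},
}
genotype = {"locus1": ("A", "a"), "locus2": ("b", "b")}

def log_likelihood(genotype, freqs):
    ll = 0.0
    for locus, (a1, a2) in genotype.items():
        p, q = freqs[locus][a1], freqs[locus][a2]
        ll += np.log(p * q if a1 == a2 else 2 * p * q)
    return ll

print(f"log L(genotype | pop A) = {log_likelihood(genotype, freqs):.3f}")
# DLR-style statistics compare such log-likelihoods between the sampling
# population and a candidate source; F0 immigrants show a large deficit
# in their home population.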
Abstract:
Aim: The aim of this study was to assess the discriminatory power and potential turnaround time (TAT) of a PCR-based method for the detection of methicillin-resistant Staphylococcus aureus (MRSA) from screening swabs. Methods: Screening swabs were examined using the current laboratory protocol of direct culture on mannitol salt agar supplemented with oxacillin (MSAO-direct). The PCR method involved pre-incubation in broth for 4 hours followed by a multiplex PCR with primers directed to the mecA and nuc genes of MRSA. The reference standard was determined by pre-incubation in broth for 4 hours followed by culture on MSAO (MSAO-broth). Results: A total of 256 swabs were analysed. The rates of detection of MRSA using MSAO-direct, MSAO-broth and PCR were 10.2%, 13.3% and 10.2%, respectively. For PCR, the sensitivity, specificity, positive predictive value and negative predictive value were 66.7% (95% CI 51.9-83.3%), 98.6% (95% CI 97.1-100%), 84.6% (95% CI 76.2-100%) and 95.2% (95% CI 92.4-98.0%), respectively; these results were almost identical to those obtained with MSAO-direct. The agreement between MSAO-direct and PCR was 61.5% (95% CI 42.8-80.2%) for positive results, 95.6% (95% CI 93.0-98.2%) for negative results, and 92.2% (95% CI 88.9-95.5%) overall. Conclusions: (1) The discriminatory power of PCR and MSAO-direct is similar, but the level of agreement, especially for true positive results, is low. (2) The potential TAT of the PCR method provides a marked advantage over conventional methods. (3) Further modifications to the PCR method, such as increased broth incubation time, use of selective broth and adaptation to real-time PCR, may improve sensitivity and TAT.
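How the reported diagnostic metrics follow from a 2x2 table can be shown in Python with hypothetical counts chosen to roughly reproduce the abstract's percentages (the study's raw counts are not given):

# tp/fp/fn/tn are counts of true/false positives/negatives against the
# reference standard (MSAO-broth); values below are hypothetical.
tp, fp, fn, tn = 22, 4, 11, 219

sensitivity = tp / (tp + fn)      # fraction of true positives detected
specificity = tn / (tn + fp)      # fraction of true negatives detected
ppv = tp / (tp + fp)              # positive predictive value
npv = tn / (tn + fn)              # negative predictive value
print(f"sens {sensitivity:.1%}, spec {specificity:.1%}, "
      f"PPV {ppv:.1%}, NPV {npv:.1%}")
# -> sens 66.7%, spec 98.2%, PPV 84.6%, NPV 95.2%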
Abstract:
Modelling and optimization of the power draw of large SAG/AG mills is important due to the large power draw which modern mills require (5-10 MW). The cost of grinding is the single biggest cost within the entire process of mineral extraction. Traditionally, modelling of mill power draw has been done using empirical models. Although these models are reliable, they cannot model mills and operating conditions outside the boundaries of the model database. Also, due to their static nature, such models cannot capture the impact of changing conditions within the mill on the power draw. Despite advances in computing power, discrete element method (DEM) modelling of large mills with many thousands of particles can be a time-consuming task. The speed of computation is determined principally by two parameters: the number of particles involved and the material properties. The computational time step is set by the size of the smallest particle present in the model and by the material properties (stiffness): with small particles the computational time step is short, whilst with large particles it is larger. Hence, from the point of view of the time required for modelling (which usually corresponds to the time required for 3-4 mill revolutions), it is advantageous that the smallest particles in the model are not unnecessarily small. The objective of this work is to compare the net power draw of a mill whose charge is characterised by different size distributions, while preserving constant charge mass and mill speed.
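The dependence of the DEM time step on the smallest particle can be illustrated in Python with the widely used Rayleigh time-step estimate; the material property values below are illustrative, not taken from the paper:

import numpy as np

def rayleigh_dt(radius_m, density, shear_modulus, poisson):
    # Rayleigh critical time step for a spherical DEM particle:
    # t_R = pi * R * sqrt(rho / G) / (0.1631 * nu + 0.8766).
    return (np.pi * radius_m * np.sqrt(density / shear_modulus)
            / (0.1631 * poisson + 0.8766))

rho, G, nu = 2700.0, 1.0e9, 0.3          # ore-like properties (illustrative)
for r_mm in (5.0, 25.0, 100.0):
    dt = rayleigh_dt(r_mm / 1000.0, rho, G, nu)
    print(f"r = {r_mm:6.1f} mm  ->  dt ~ {dt:.2e} s")
# The stable step scales linearly with the smallest radius: halving it
# doubles the number of steps needed for the same 3-4 mill revolutions.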