14 results for Topology-based methods

at University of Queensland eSpace - Australia


Relevance:

90.00%

Publisher:

Abstract:

In a deregulated electricity market, optimizing dispatch capacity and transmission capacity is among the core concerns of market operators. Many market operators have adopted linear programming (LP) based methods to perform market dispatch operations in order to exploit the computational efficiency of LP. In this paper, the search capability of genetic algorithms (GAs) is used to solve the market dispatch problem. The GA model solves pool-based capacity dispatch while optimizing interconnector transmission capacity. Case studies and corresponding analyses are presented to demonstrate the efficiency of the GA model.
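
As a rough illustration of the kind of GA dispatch model described above, the sketch below evolves generator outputs toward a least-cost dispatch that meets demand. The generator data, penalty weight and GA settings are hypothetical and are not taken from the paper.

```python
# Illustrative sketch only: a minimal real-coded genetic algorithm for a toy
# pool-based dispatch problem. Generator data, penalty weight and GA settings
# are hypothetical and not taken from the paper.
import random

GENS = [(50, 300, 20.0), (20, 200, 35.0), (10, 150, 50.0)]  # (min MW, max MW, $/MWh)
DEMAND = 400.0                       # MW to be met by the pool
POP, GENERATIONS = 60, 200

def random_individual():
    return [random.uniform(lo, hi) for lo, hi, _ in GENS]

def objective(ind):
    cost = sum(p * c for p, (_, _, c) in zip(ind, GENS))
    imbalance = abs(sum(ind) - DEMAND)
    return cost + 1e4 * imbalance    # heavy penalty on supply-demand mismatch

def crossover(a, b):
    alpha = random.random()
    return [alpha * x + (1 - alpha) * y for x, y in zip(a, b)]

def mutate(ind):
    return [min(hi, max(lo, p + random.gauss(0, 0.1 * (hi - lo))))
            if random.random() < 0.3 else p
            for p, (lo, hi, _) in zip(ind, GENS)]

pop = [random_individual() for _ in range(POP)]
for _ in range(GENERATIONS):
    pop.sort(key=objective)          # best (lowest objective) first
    elite = pop[:POP // 4]
    pop = elite + [mutate(crossover(random.choice(elite), random.choice(elite)))
                   for _ in range(POP - len(elite))]

best = min(pop, key=objective)
print("dispatch (MW):", [round(p, 1) for p in best], "objective:", round(objective(best), 1))
```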

Relevance:

90.00%

Publisher:

Abstract:

In this paper, a new differential evolution (DE) based assessment of power system optimal available transfer capability (ATC) is presented. Power system total transfer capability (TTC) is traditionally solved by the repeated power flow (RPF) method and the continuation power flow (CPF) method. These methods assume that the outputs of the source area generators are increased in identical proportion to balance the load increment in the sink area. A new approach based on the DE algorithm is proposed to generate an optimal dispatch of both the source area generators and the sink area loads. This new method can compute the ATC between two areas with a significant improvement in accuracy compared with the traditional RPF and CPF based methods. A case study using a 30-bus system is given to verify the efficiency and effectiveness of the new DE based ATC optimization approach.
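
A minimal sketch of a DE-based transfer capability search is given below, using SciPy's differential_evolution on a toy linearised network; the shift factors, line limit and generator headroom are invented stand-ins for the full power flow model used in the paper.

```python
# Illustrative sketch only: differential evolution applied to a toy transfer
# capability problem. The linearised "power flow" and limits are hypothetical
# stand-ins for the repeated/continuation power flow used in practice.
import numpy as np
from scipy.optimize import differential_evolution

PTDF = np.array([0.6, 0.4, 0.5])              # hypothetical shift factors per source generator
LINE_LIMIT = 120.0                            # MW limit on the interconnector
GEN_HEADROOM = [(0, 100), (0, 150), (0, 80)]  # MW increase available per generator

def neg_transfer(dp):
    transfer = dp.sum()                       # total extra MW moved to the sink area
    flow = PTDF @ dp                          # simplified interconnector flow
    penalty = 1e4 * max(0.0, flow - LINE_LIMIT)
    return -transfer + penalty                # DE minimises, so negate the transfer

result = differential_evolution(neg_transfer, GEN_HEADROOM, seed=1, tol=1e-8)
print("ATC estimate (MW):", round(result.x.sum(), 2), "dispatch:", np.round(result.x, 2))
```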

Relevance:

80.00%

Publisher:

Abstract:

A method is presented for calculating the winding patterns required to design independent zonal and tesseral biplanar shim coils for magnetic resonance imaging. Streamline, target-field, Fourier integral and Fourier series methods are utilized. For both Fourier-based methods, the desired target field is specified on the surface of the conducting plates. For the Fourier series method it is possible to specify the target field at additional depths interior to the two conducting plates. The conducting plates are confined symmetrically in the xy plane with dimensions 2a x 2b, and are separated by 2d in the z direction. The specification of the target field is symmetric for the Fourier integral method, but can be over some asymmetric portion pa < x < qa and sb < y < tb of the coil dimensions (-1 < p < q < 1 and -1 < s < t < 1) for the Fourier series method. Arbitrary functions are used in the outer sections to ensure continuity of the magnetic field across the entire coil face. For the Fourier series case, the entire field is periodically extended as double half-range sine or cosine series. The resultant Fourier coefficients are substituted into the Fourier series and integral expressions for the internal and external magnetic fields, and stream functions on both the conducting surfaces. A contour plot of the stream function directly gives the required coil winding patterns. Spherical harmonic analysis of field calculations from a ZX shim coil indicates that example designs and theory are well matched.
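
For orientation only, the generic form of a double half-range sine expansion of a target field specified over a 2a x 2b plate is shown below; the particular target fields, stream functions and coefficients derived in the paper are not reproduced here.

```latex
% Generic double half-range sine expansion on [-a,a] x [-b,b]; the paper's
% own target fields and stream-function expressions are not reproduced.
\[
  B_z(x, y) \;\approx\; \sum_{n=1}^{N}\sum_{m=1}^{M}
      b_{nm}\,
      \sin\!\Big(\frac{n\pi (x+a)}{2a}\Big)
      \sin\!\Big(\frac{m\pi (y+b)}{2b}\Big),
  \qquad
  b_{nm} = \frac{1}{ab}\int_{-a}^{a}\!\!\int_{-b}^{b}
      B_z(x,y)\,
      \sin\!\Big(\frac{n\pi (x+a)}{2a}\Big)
      \sin\!\Big(\frac{m\pi (y+b)}{2b}\Big)\,dy\,dx .
\]
```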

Relevance:

80.00%

Publisher:

Abstract:

In this paper we investigate a Bayesian procedure for the estimation of a flexible generalised distribution, notably the MacGillivray adaptation of the g-and-k distribution. This distribution, described through its inverse cdf or quantile function, generalises the standard normal through extra parameters which together describe skewness and kurtosis. The standard quantile-based methods for estimating the parameters of generalised distributions are often arbitrary and do not rely on computation of the likelihood. MCMC, however, provides a simulation-based alternative for obtaining maximum likelihood estimates of the parameters of these distributions, or for deriving posterior estimates of the parameters through a Bayesian framework. In this paper we adopt the latter approach. The proposed methodology is illustrated through an application in which the parameter of interest is slightly skewed.
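
A minimal sketch of the commonly quoted g-and-k quantile function is given below (with the conventional c = 0.8); the MCMC machinery for the Bayesian fit, which requires numerically inverting this function to evaluate the likelihood, is omitted.

```python
# Illustrative sketch only: the usual form of the g-and-k quantile function
# (A = location, B = scale, g = skewness, k = kurtosis, c conventionally 0.8).
# A full Bayesian fit would wrap a numerical inversion of this function in an
# MCMC sampler; that machinery is not shown.
import numpy as np
from scipy.stats import norm

def gk_quantile(u, A, B, g, k, c=0.8):
    z = norm.ppf(u)                                  # standard normal quantile
    return A + B * (1 + c * np.tanh(g * z / 2)) * (1 + z**2) ** k * z

# Inverse-CDF sampling: uniforms pushed through the quantile function.
rng = np.random.default_rng(0)
sample = gk_quantile(rng.uniform(size=1000), A=0.0, B=1.0, g=0.3, k=0.1)
print(sample.mean(), sample.std())
```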

Relevance:

80.00%

Publisher:

Abstract:

Poly-beta-hydroxyalkanoate (PHA) is a polymer commonly used for carbon and energy storage by many different bacterial cells. Polyphosphate accumulating organisms (PAOs) and glycogen accumulating organisms (GAOs) store PHA anaerobically through metabolism of carbon substrates such as acetate and propionate. Although poly-beta-hydroxybutyrate (PHB) and poly-beta-hydroxyvalerate (PHV) are commonly quantified using a previously developed gas chromatography (GC) method, poly-beta-hydroxy-2-methylvalerate (PH2MV) is seldom quantified despite the fact that it has been shown to be a key PHA fraction produced when PAOs or GAOs metabolise propionate. This paper presents two GC-based methods modified for the extraction and quantification of PHB, PHV and PH2MV from enhanced biological phosphorus removal (EBPR) systems. For the extraction of PHB and PHV from acetate-fed PAO and GAO cultures, a 3% sulfuric acid concentration and a 2-20 h digestion time are recommended, while a 10% sulfuric acid solution digested for 20 h is recommended for PHV and PH2MV analysis from propionate-fed EBPR systems.

Relevance:

80.00%

Publisher:

Abstract:

Testing for simultaneous vicariance across comparative phylogeographic data sets is a notoriously difficult problem, hindered by mutational variance, coalescent variance, and variability across pairs of sister taxa in the parameters that affect genetic divergence. We simulate vicariance to characterize the behaviour of several commonly used summary statistics across a range of divergence times, and to characterize this behaviour in comparative phylogeographic data sets having multiple taxon pairs. We found Tajima's D to be relatively uncorrelated with other summary statistics across divergence times, and using simple hypothesis testing of simultaneous vicariance given variable population sizes, we counter-intuitively found that the variance across taxon pairs in Nei and Li's net nucleotide divergence (pi_net), a common measure of population divergence, is often inferior to the variance in Tajima's D across taxon pairs as a test statistic for distinguishing ancient simultaneous vicariance from variable vicariance histories. The opposite and more intuitive pattern is found when testing for more recent simultaneous vicariance. Overall, we found that depending on the timing of vicariance, one of these two test statistics can achieve high statistical power for rejecting simultaneous vicariance, given a reasonable number of intron loci (> 5 loci, 400 bp) and a range of conditions. These results suggest that components of these two composite summary statistics should be used in future simulation-based methods that can simultaneously use a pool of summary statistics to test the comparative phylogeographic hypotheses we consider here.
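
As a rough sketch of the ingredients involved, the code below computes Tajima's D from a 0/1 haplotype matrix and takes its variance across taxon pairs; the input data are random placeholders, and the coalescent simulations needed to build a null distribution for the simultaneous-vicariance test are not shown.

```python
# Illustrative sketch only: Tajima's D from a 0/1 haplotype matrix, and the
# variance of D across taxon pairs used as a summary statistic. Random data
# stand in for real loci; no coalescent simulation of the null is included.
import numpy as np

def tajimas_d(haps):
    """haps: (n_sequences, n_sites) array of 0/1 alleles for one taxon pair."""
    n, _ = haps.shape
    counts = haps.sum(axis=0)
    seg = (counts > 0) & (counts < n)
    S = int(seg.sum())
    if S == 0:
        return 0.0
    j = counts[seg]
    pi = np.sum(2.0 * j * (n - j) / (n * (n - 1)))      # mean pairwise differences
    a1 = np.sum(1.0 / np.arange(1, n))
    a2 = np.sum(1.0 / np.arange(1, n) ** 2)
    b1 = (n + 1) / (3.0 * (n - 1))
    b2 = 2.0 * (n ** 2 + n + 3) / (9.0 * n * (n - 1))
    c1 = b1 - 1.0 / a1
    c2 = b2 - (n + 2) / (a1 * n) + a2 / a1 ** 2
    e1, e2 = c1 / a1, c2 / (a1 ** 2 + a2)
    return (pi - S / a1) / np.sqrt(e1 * S + e2 * S * (S - 1))

# Variance of D across taxon pairs (placeholder data: 8 pairs, 20 sequences, 400 sites).
rng = np.random.default_rng(1)
pairs = [rng.integers(0, 2, size=(20, 400)) for _ in range(8)]
var_d = np.var([tajimas_d(h) for h in pairs])
print("Var(D) across taxon pairs:", round(var_d, 3))
```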

Relevance:

80.00%

Publisher:

Abstract:

Background: The residue-wise contact order (RWCO) describes the sequence separations between a residue of interest and its contacting residues in a protein sequence. It is a new kind of one-dimensional protein structure that represents the extent of long-range contacts and is considered a generalization of contact order. Together with secondary structure, accessible surface area, the B factor, and contact number, RWCO provides comprehensive and important information for reconstructing the protein three-dimensional structure from a set of one-dimensional structural properties. Accurately predicting RWCO values could have many important applications in protein three-dimensional structure prediction and protein folding rate prediction, and could give deep insights into protein sequence-structure relationships. Results: We developed a novel approach to predict residue-wise contact order values in proteins based on support vector regression (SVR), starting from primary amino acid sequences. We explored seven different sequence encoding schemes to examine their effects on the prediction performance, including local sequence in the form of PSI-BLAST profiles, local sequence plus amino acid composition, local sequence plus molecular weight, local sequence plus secondary structure predicted by PSIPRED, local sequence plus molecular weight and amino acid composition, local sequence plus molecular weight and predicted secondary structure, and local sequence plus molecular weight, amino acid composition and predicted secondary structure. When using local sequences with multiple sequence alignments in the form of PSI-BLAST profiles, we could predict the RWCO distribution with a Pearson correlation coefficient (CC) between the predicted and observed RWCO values of 0.55 and a root mean square error (RMSE) of 0.82, based on a well-defined dataset with 680 protein sequences. Moreover, by incorporating global features such as molecular weight and amino acid composition we could further improve the prediction performance, increasing the CC to 0.57 and reducing the RMSE to 0.79. In addition, incorporating the secondary structure predicted by PSIPRED was found to significantly improve the prediction performance and yielded the best accuracy, with a CC of 0.60 and an RMSE of 0.78, at least comparable to other existing methods. Conclusion: The SVR method shows a prediction performance competitive with, or at least comparable to, the previously developed linear regression-based methods for predicting RWCO values. In contrast to support vector classification (SVC), SVR is very good at estimating the raw value profiles of the samples. The successful application of the SVR approach in this study reinforces the fact that support vector regression is a powerful tool for extracting the protein sequence-structure relationship and for estimating protein structural profiles from amino acid sequences.
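
A minimal sketch of window-based support vector regression is shown below using scikit-learn; the one-hot encoding, window size and synthetic targets are stand-ins for the PSI-BLAST profiles, predicted secondary structure and global features used in the paper.

```python
# Illustrative sketch only: support vector regression on sliding-window,
# one-hot encoded sequence features. The features and targets are toy
# placeholders, not the paper's PSI-BLAST profile encodings or real RWCO values.
import numpy as np
from sklearn.svm import SVR

AA = "ACDEFGHIKLMNPQRSTVWY"
WIN = 7                                           # residues per window (hypothetical)

def encode(seq, centre):
    """One-hot encode a window of WIN residues centred on `centre`."""
    vec = np.zeros((WIN, len(AA)))
    half = WIN // 2
    for k, pos in enumerate(range(centre - half, centre + half + 1)):
        if 0 <= pos < len(seq):
            vec[k, AA.index(seq[pos])] = 1.0
    return vec.ravel()

# Toy training data: random sequences with synthetic RWCO-like targets.
rng = np.random.default_rng(0)
seqs = ["".join(rng.choice(list(AA), size=60)) for _ in range(30)]
X = np.array([encode(s, i) for s in seqs for i in range(len(s))])
y = rng.normal(size=len(X))                       # placeholder targets

model = SVR(kernel="rbf", C=1.0, epsilon=0.1).fit(X, y)
print("first predictions:", np.round(model.predict(X[:3]), 3))
```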

Relevance:

80.00%

Publisher:

Abstract:

The aim of this study was to identify a set of genetic polymorphisms that efficiently divides methicillin-resistant Staphylococcus aureus (MRSA) strains into groups consistent with the population structure. The rationale was that such polymorphisms could underpin rapid real-time PCR or low-density array-based methods for monitoring MRSA dissemination in a cost-effective manner. Previously, the authors devised a computerized method for identifying sets of single nucleotide polymorphisms (SNPs) with high resolving power that are defined by multilocus sequence typing (MLST) databases, and also developed a real-time PCR method for interrogating a seven-member SNP set for genotyping S. aureus. Here, it is shown that these seven SNPs efficiently resolve the major MRSA lineages and define 27 genotypes. The SNP-based genotypes are consistent with the MRSA population structure as defined by eBURST analysis. The capacity of binary markers to improve resolution was tested using 107 diverse MRSA isolates of Australian origin that encompass nine SNP-based genotypes. The addition of the virulence-associated genes cna, pvl and bbp/sdrE, and the integrated plasmids pT181, pI258 and pUB110, resolved the nine SNP-based genotypes into 21 combinatorial genotypes. Subtyping of the SCCmec locus revealed new SCCmec types and increased the number of combinatorial genotypes to 24. It was concluded that these polymorphisms provide a facile means of assigning MRSA isolates to well-recognized lineages.
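
The sketch below shows one simple way such a combinatorial genotype could be assembled from a seven-member SNP profile plus binary marker calls; the SNP alleles and the reduced marker list are hypothetical, not the markers' actual assay results.

```python
# Illustrative sketch only: combining a seven-member SNP profile with binary
# marker presence/absence into a combinatorial genotype label. SNP names,
# alleles and the reduced marker list shown here are hypothetical.
SNP_SET = ["snp1", "snp2", "snp3", "snp4", "snp5", "snp6", "snp7"]
BINARY_MARKERS = ["cna", "pvl", "pT181", "pUB110"]   # subset for illustration

def combinatorial_genotype(snp_calls, marker_calls):
    """snp_calls: dict snp -> allele; marker_calls: dict marker -> bool."""
    snp_part = "".join(snp_calls[s] for s in SNP_SET)
    marker_part = "".join("1" if marker_calls[m] else "0" for m in BINARY_MARKERS)
    return f"{snp_part}-{marker_part}"

isolate_snps = dict(zip(SNP_SET, "AGTCCAG"))                  # hypothetical calls
isolate_markers = {m: m in {"pvl", "pUB110"} for m in BINARY_MARKERS}
print(combinatorial_genotype(isolate_snps, isolate_markers))  # e.g. AGTCCAG-0101
```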

Relevance:

80.00%

Publisher:

Abstract:

This paper describes experiments conducted to simultaneously tune 15 joints of a humanoid robot. Two genetic algorithm (GA) based tuning methods were developed and compared against a hand-tuned solution. The system was tuned to minimise tracking error while at the same time achieving smooth joint motion. Joint smoothness is crucial for accurate online ZMP estimation, a prerequisite for a closed-loop, dynamically stable humanoid walking gait. Results both in simulation and on a real robot are presented, demonstrating the superior smoothness performance of the GA-based methods.
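
One plausible form of a fitness function that trades tracking error against joint smoothness is sketched below; the jerk-based smoothness term and its weighting are assumptions rather than the paper's exact objective.

```python
# Illustrative sketch only: a candidate GA fitness function combining joint
# tracking error with a jerk-based smoothness penalty. The weighting and the
# choice of jerk as the smoothness measure are assumptions.
import numpy as np

def fitness(reference, actual, dt, smooth_weight=0.1):
    """reference/actual: (timesteps, n_joints) joint-angle trajectories."""
    tracking_error = np.mean((reference - actual) ** 2)
    jerk = np.diff(actual, n=3, axis=0) / dt ** 3      # third derivative per joint
    smoothness_penalty = np.mean(jerk ** 2)
    return tracking_error + smooth_weight * smoothness_penalty

t = np.linspace(0, 2, 200)[:, None]
ref = np.sin(2 * np.pi * t) * np.ones((1, 15))          # 15 joints, as in the paper
act = ref + 0.01 * np.random.default_rng(0).normal(size=ref.shape)
print("fitness:", fitness(ref, act, dt=0.01))
```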

Relevance:

40.00%

Publisher:

Abstract:

Genetic assignment methods use genotype likelihoods to draw inference about where individuals were or were not born, potentially allowing direct, real-time estimates of dispersal. We used simulated data sets to test the power and accuracy of Monte Carlo resampling methods in generating statistical thresholds for identifying F0 immigrants in populations with ongoing gene flow, and hence for providing direct, real-time estimates of migration rates. The identification of accurate critical values required that resampling methods preserve the linkage disequilibrium deriving from recent generations of immigrants and reflect the sampling variance present in the data set being analysed. A novel Monte Carlo resampling method taking these aspects into account was proposed and its efficiency was evaluated. Power and error were relatively insensitive to the frequency assumed for missing alleles. Power to identify F0 immigrants was improved by using large sample sizes (up to about 50 individuals) and by sampling all populations from which migrants may have originated. A combination of plotting genotype likelihoods and calculating mean genotype likelihood ratios (DLR) appeared to be an effective way to predict whether F0 immigrants could be identified for a particular pair of populations using a given set of markers.
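
A stripped-down sketch of the underlying idea, genotype log-likelihoods under population allele frequencies with a Monte Carlo critical value, is given below; it assumes unlinked biallelic loci and does not reproduce the LD-preserving resampling scheme proposed in the paper.

```python
# Illustrative sketch only: genotype log-likelihoods under population allele
# frequencies, with a Monte Carlo threshold for flagging putative immigrants.
# Unlinked biallelic loci are assumed; allele frequencies are random placeholders.
import numpy as np

rng = np.random.default_rng(0)
N_LOCI = 10
freq_home = rng.uniform(0.2, 0.8, N_LOCI)     # allele frequencies, home population

def log_lik(genotype, p):
    """genotype: count (0, 1, 2) of one allele at each biallelic locus."""
    probs = np.where(genotype == 2, p ** 2,
             np.where(genotype == 1, 2 * p * (1 - p), (1 - p) ** 2))
    return np.log(probs).sum()

def simulate_genotype(p):
    return rng.binomial(2, p)

# Null distribution of the home-population log-likelihood by Monte Carlo.
null = np.array([log_lik(simulate_genotype(freq_home), freq_home)
                 for _ in range(5000)])
threshold = np.quantile(null, 0.01)            # 1% critical value

resident = simulate_genotype(freq_home)
print("resident below threshold?", log_lik(resident, freq_home) < threshold)
```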

Relevance:

40.00%

Publisher:

Abstract:

While developments in distributed object computing environments, such as the Common Object Request Broker Architecture (CORBA) [17] and the Telecommunications Information Networking Architecture (TINA) [16], have enabled interoperability between domains in large open distributed systems, managing the resources within such systems has become an increasingly complex task. This challenge has been considered for several years within the distributed systems management research community, and policy-based management has recently emerged as a promising solution. Large evolving enterprises present a significant challenge for policy-based management, partly due to the requirement to support both mutual transparency and individual autonomy between domains [2], but also because the fluidity and complexity of interactions occurring within such environments require an ability to cope with the coexistence of multiple, potentially inconsistent policies. This paper discusses the need to provide both dynamic (run-time) and static (compile-time) conflict detection and resolution for policies in such systems, and builds on our earlier conflict detection work [7, 8] to introduce methods for conflict resolution in large open distributed systems.
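
As a toy illustration of static conflict detection, the sketch below flags modality conflicts between permissive and prohibitive policies with overlapping subjects, targets and actions; this simplified policy model is an assumption, not the framework developed in the paper.

```python
# Illustrative sketch only: static detection of modality conflicts between
# policies (a permission and a prohibition overlapping in subject, target and
# action). The policy model and conflict taxonomy here are simplified assumptions.
from dataclasses import dataclass
from itertools import combinations

@dataclass(frozen=True)
class Policy:
    modality: str          # "auth+" (permit) or "auth-" (deny)
    subjects: frozenset
    targets: frozenset
    actions: frozenset

def modality_conflicts(policies):
    """Return pairs of policies with opposite modality and overlapping scope."""
    return [(a, b) for a, b in combinations(policies, 2)
            if a.modality != b.modality
            and a.subjects & b.subjects
            and a.targets & b.targets
            and a.actions & b.actions]

rules = [
    Policy("auth+", frozenset({"operators"}), frozenset({"routerA"}), frozenset({"reboot"})),
    Policy("auth-", frozenset({"operators"}), frozenset({"routerA"}), frozenset({"reboot", "shutdown"})),
]
for a, b in modality_conflicts(rules):
    print("conflict:", a.modality, "vs", b.modality)
```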