110 results for Robust methods
at University of Queensland eSpace - Australia
Abstract:
Krylov subspace techniques have been shown to yield robust methods for the numerical computation of large sparse matrix exponentials and especially the transient solutions of Markov Chains. The attractiveness of these methods results from the fact that they allow us to compute the action of a matrix exponential operator on an operand vector without having to compute, explicitly, the matrix exponential in isolation. In this paper we compare a Krylov-based method with some of the current approaches used for computing transient solutions of Markov chains. After a brief synthesis of the features of the methods used, wide-ranging numerical comparisons are performed on a Power Challenge Array supercomputer on three different models. (C) 1999 Elsevier Science B.V. All rights reserved. AMS Classification: 65F99; 65L05; 65U05.
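A minimal sketch of the central idea, computing exp(A)v without forming exp(A). SciPy's `expm_multiply` is used as a stand-in for the Krylov propagator discussed above (internally it uses a truncated-Taylor scheme rather than Krylov projection, but it serves the same matrix-free purpose); the matrix here is a random sparse illustration, not a real Markov generator.

```python
import numpy as np
from scipy.linalg import expm
from scipy.sparse import identity, random as sprandom
from scipy.sparse.linalg import expm_multiply

rng = np.random.default_rng(0)
n = 200
# Illustrative sparse matrix (not a true CTMC generator)
A = sprandom(n, n, density=0.02, random_state=0) - 0.5 * identity(n)
v = rng.standard_normal(n)

# Evaluate exp(A) @ v without ever forming exp(A) explicitly
w_action = expm_multiply(A.tocsc(), v)

# Dense reference computation for comparison
w_dense = expm(A.toarray()) @ v
print(np.linalg.norm(w_action - w_dense) / np.linalg.norm(w_dense))
```

For genuinely large sparse matrices, only the matrix-free route is feasible: the dense exponential of a large generator is both too expensive to compute and too dense to store.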
Abstract:
In accordance with New Zealand’s Resource Management Act 1991, in 2003, electricity generating company Genesis Energy made public its intention to apply for consent to build the Awhitu wind farm. Several community groups claiming to represent the majority opposed this application and in September 2004 consent was declined. The aim of this study was to investigate the attitudes of local community members to the proposed wind farm. A survey was mailed to 500 Franklin residents, systematically selected from the local 2004/2005 telephone directory. Forty questionnaires were returned undelivered. Of the remaining 460, completed questionnaires were returned from 46% (211). Most residents, 70% (145), supported a wind farm being built in their area, with 17% (35) neutral, and only 13% (28) against the farm. There was no statistical difference in respondents’ attitudes by sex, age, or residential proximity to the farm. Respondents listed renewable resource (83%), suitability (78%), and environmental friendliness (76%) as the main advantages. Visual unsightliness (24%) and noise pollution (21%) were listed as the main perceived disadvantages. Contrary to the assertions of several lobby groups, the majority of local residents support the construction of the Awhitu wind farm. Scientifically robust methods are essential to measure community attitudes appropriately, particularly on contentious issues.
Abstract:
Normal mixture models are being increasingly used to model the distributions of a wide variety of random phenomena and to cluster sets of continuous multivariate data. However, for a set of data containing a group or groups of observations with longer than normal tails or atypical observations, the use of normal components may unduly affect the fit of the mixture model. In this paper, we consider a more robust approach by modelling the data by a mixture of t distributions. The use of the ECM algorithm to fit this t mixture model is described and examples of its use are given in the context of clustering multivariate data in the presence of atypical observations in the form of background noise.
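A toy sketch of the machinery behind the t model's robustness, reduced to a single univariate t component with fixed degrees of freedom (the paper fits full multivariate t mixtures via ECM; the update rule below is only the standard single-component E/CM step, and all data here are simulated):

```python
import numpy as np

def t_ecm_step(x, mu, sigma2, nu):
    """One ECM iteration for a univariate t location/scale fit (nu fixed).
    The E-step weight u_i = (nu + 1) / (nu + d_i) shrinks toward zero for
    outlying points, which is what makes the t model robust; a normal
    model corresponds to u_i = 1 for every point."""
    d = (x - mu) ** 2 / sigma2
    u = (nu + 1.0) / (nu + d)            # E-step: latent precision weights
    mu_new = np.sum(u * x) / np.sum(u)   # CM-step: weighted mean
    sigma2_new = np.sum(u * (x - mu_new) ** 2) / len(x)
    return mu_new, sigma2_new

rng = np.random.default_rng(5)
# 200 well-behaved points plus three gross outliers ("background noise")
x = np.concatenate([rng.normal(0.0, 1.0, 200), np.array([50.0, 60.0, 70.0])])
mu, sigma2 = x.mean(), x.var()   # start from the outlier-corrupted normal fit
for _ in range(50):
    mu, sigma2 = t_ecm_step(x, mu, sigma2, nu=3.0)
print(mu)  # pulled back near 0 despite the outliers
```

The normal fit starts with a mean near 0.9 and a variance above 50; the iterated t weights push both back to values dominated by the 200 clean points.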
Abstract:
The acquisition of HI Parkes All Sky Survey (HIPASS) southern sky data commenced at the Australia Telescope National Facility's Parkes 64-m telescope in 1997 February, and was completed in 2000 March. HIPASS is the deepest HI survey yet of the sky south of declination +2 degrees, and is sensitive to emission out to 170 h(75)(-1) Mpc. The characteristic root mean square noise in the survey images is 13.3 mJy. This paper describes the survey observations, which comprise 23 020 eight-degree scans of 9-min duration, and details the techniques used to calibrate and image the data. The processing algorithms are designed to be statistically robust to the presence of interference signals, and are particularly suited to imaging point (or nearly point) sources. Specifically, a major improvement in image quality is obtained by a median-gridding algorithm which uses the median estimator in place of the mean estimator.
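The robustness gain from median gridding can be seen with a few lines of simulated data: averaging scan samples that contain one interference spike drags the flux estimate badly, while the median barely moves (numbers below are invented for illustration and are not HIPASS data):

```python
import numpy as np

rng = np.random.default_rng(1)
# 49 clean scan samples of a ~13 mJy source, plus one strong interference spike
samples = 13.0 + rng.normal(0.0, 1.0, 49)
samples = np.append(samples, 500.0)  # RFI spike

mean_est = samples.mean()        # dragged upward by the single spike
median_est = np.median(samples)  # barely affected

print(f"mean: {mean_est:.1f}  median: {median_est:.1f}")
```

This is the same trade-off the survey pipeline exploits: the median has higher statistical noise than the mean on clean data, but it is insensitive to the occasional gross outlier that interference produces.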
Abstract:
A robust semi-implicit central partial difference algorithm for the numerical solution of coupled stochastic parabolic partial differential equations (PDEs) is described. This can be used for calculating correlation functions of systems of interacting stochastic fields. Such field equations can arise in the description of Hamiltonian and open systems in the physics of nonlinear processes, and may include multiplicative noise sources. The algorithm can be used for studying the properties of nonlinear quantum or classical field theories. The general approach is outlined and applied to a specific example, namely the quantum statistical fluctuations of ultra-short optical pulses in chi(2) parametric waveguides. This example uses a non-diagonal coherent state representation, and correctly predicts the sub-shot-noise level spectral fluctuations observed in homodyne detection measurements. It is expected that the methods used will be applicable for higher-order correlation functions and other physical problems as well. A stochastic differencing technique for reducing sampling errors is also introduced. This involves solving nonlinear stochastic parabolic PDEs in combination with a reference process, which uses the Wigner representation in the example presented here. A computer implementation on MIMD parallel architectures is discussed. (C) 1997 Academic Press.
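A univariate toy version of the semi-implicit midpoint idea (a single stochastic ODE rather than the coupled PDE lattice of the paper; the test equation and step counts are invented for illustration). The midpoint value is found by a short fixed-point iteration, which is what gives semi-implicit schemes their robustness for stiff stochastic problems:

```python
import numpy as np

def semi_implicit_midpoint(x0, drift, diffusion, dt, steps, rng):
    """Semi-implicit midpoint scheme for dX = a(X) dt + b(X) dW:
    x_{n+1} = x_n + a(x_mid) dt + b(x_mid) dW, with the implicit
    midpoint x_mid = (x_n + x_{n+1})/2 found by fixed-point iteration."""
    x = x0
    path = [x]
    for _ in range(steps):
        dW = rng.normal(0.0, np.sqrt(dt))
        x_mid = x
        for _ in range(4):  # a few fixed-point iterations usually suffice
            x_mid = x + 0.5 * (drift(x_mid) * dt + diffusion(x_mid) * dW)
        x = 2.0 * x_mid - x  # recover x_{n+1} from the converged midpoint
        path.append(x)
    return np.array(path)

rng = np.random.default_rng(2)
# Ornstein-Uhlenbeck test case: dX = -X dt + 0.1 dW, integrated to t = 10
path = semi_implicit_midpoint(1.0, lambda x: -x, lambda x: 0.1, 0.01, 1000, rng)
print(path[-1])  # should have relaxed to small fluctuations about 0
```

For multiplicative noise the midpoint evaluation also gives the Stratonovich interpretation of the noise term, which is often the desired one in phase-space simulations.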
Abstract:
Numerical optimisation methods are being more commonly applied to agricultural systems models, to identify the most profitable management strategies. The available optimisation algorithms are reviewed and compared, with literature and our studies identifying evolutionary algorithms (including genetic algorithms) as superior in this regard to simulated annealing, tabu search, hill-climbing, and direct-search methods. Results of a complex beef property optimisation, using a real-value genetic algorithm, are presented. The relative contributions of the range of operational options and parameters of this method are discussed, and general recommendations listed to assist practitioners applying evolutionary algorithms to the solution of agricultural systems. (C) 2001 Elsevier Science Ltd. All rights reserved.
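A minimal real-valued genetic algorithm in the spirit described above, with tournament selection, blend crossover and Gaussian mutation. This is a generic illustrative sketch, not the authors' implementation, and the "management strategy" objective is just a toy sphere function:

```python
import numpy as np

def real_ga(fitness, bounds, pop_size=40, gens=100, rng=None):
    """Minimise `fitness` over a box using a simple real-valued GA."""
    if rng is None:
        rng = np.random.default_rng(0)
    lo, hi = bounds
    pop = rng.uniform(lo, hi, size=(pop_size, len(lo)))
    for _ in range(gens):
        scores = np.array([fitness(ind) for ind in pop])
        new_pop = []
        for _ in range(pop_size):
            # tournament selection: each parent is the better of two picks
            i, j = rng.integers(pop_size, size=2)
            p1 = pop[i] if scores[i] < scores[j] else pop[j]
            i, j = rng.integers(pop_size, size=2)
            p2 = pop[i] if scores[i] < scores[j] else pop[j]
            alpha = rng.random()
            child = alpha * p1 + (1 - alpha) * p2           # blend crossover
            child += rng.normal(0, 0.1, size=child.shape)   # Gaussian mutation
            new_pop.append(np.clip(child, lo, hi))
        pop = np.array(new_pop)
    scores = np.array([fitness(ind) for ind in pop])
    return pop[scores.argmin()]

# Toy objective standing in for a farm-management model
best = real_ga(lambda x: float(np.sum(x**2)),
               (np.array([-5.0, -5.0]), np.array([5.0, 5.0])))
print(best)
```

In practice the fitness call would be a full simulation-model run, which is why population-based methods that tolerate noisy, non-differentiable objectives fare well on agricultural systems models.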
Abstract:
Uncontrolled systems ẋ ∈ Ax, where A is a non-empty compact set of matrices, and controlled systems ẋ ∈ Ax + Bu are considered. Higher-order systems 0 ∈ Px − Du, where P and D are sets of differential polynomials, are also studied. It is shown that, under natural conditions commonly occurring in robust control theory, with some mild additional restrictions, asymptotic stability of differential inclusions is guaranteed. The main results are variants of small-gain theorems and the principal technique used is the Krasnosel'skii-Pokrovskii principle of absence of bounded solutions.
Abstract:
This paper discusses efficient simulation methods for stochastic chemical kinetics. Based on the tau-leap and midpoint tau-leap methods of Gillespie [D. T. Gillespie, J. Chem. Phys. 115, 1716 (2001)], binomial random variables are used in these leap methods rather than Poisson random variables. The motivation for this approach is to improve the efficiency of the Poisson leap methods by using larger stepsizes. Unlike Poisson random variables, whose range of sample values is from zero to infinity, binomial random variables have a finite range of sample values. This probabilistic property has been used to restrict possible reaction numbers and to avoid negative molecular numbers in stochastic simulations when larger stepsizes are used. In this approach a binomial random variable is defined for a single reaction channel in order to keep the reaction number of this channel below the numbers of molecules that undergo this reaction. A sampling technique is also designed for the total reaction number of a reactant species that undergoes two or more reaction channels. Samples for the total reaction number are not greater than the molecular number of this species. In addition, probability properties of the binomial random variables provide stepsize conditions for restricting reaction numbers in a chosen time interval. These stepsize conditions are important properties of robust leap control strategies. Numerical results indicate that the proposed binomial leap methods can be applied to a wide range of chemical reaction systems with very good accuracy and significant improvement in efficiency over existing approaches. (C) 2004 American Institute of Physics.
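The single-channel case can be sketched in a few lines. For a decay channel X → ∅ with propensity kX, drawing the leap's reaction count from Binomial(X, p) caps it at the available molecule count, so the state can never go negative; the rate, stepsize and molecule numbers below are invented for illustration:

```python
import numpy as np

def binomial_leap(x0, k, tau, steps, rng):
    """Binomial tau-leap for the single decay channel X -> 0 at rate k*X.

    Drawing the count from Binomial(X, p) with p = min(1, k*tau) guarantees
    it never exceeds the available molecules, unlike a Poisson draw."""
    x = x0
    for _ in range(steps):
        p = min(1.0, k * tau)
        fired = rng.binomial(x, p)   # at most x reactions can fire
        x -= fired
    return x

rng = np.random.default_rng(3)
# 1000 molecules decaying at rate 0.1 over total time 5 (100 leaps of tau=0.05)
finals = [binomial_leap(1000, 0.1, 0.05, 100, rng) for _ in range(200)]
print(np.mean(finals))  # close to the deterministic 1000*exp(-0.5) ~ 606
```

With the same stepsize a Poisson draw can occasionally exceed the remaining population; handling multiple channels competing for one species requires the joint sampling technique the paper describes.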
Abstract:
The diversity of networks (wired/wireless) calls for a TCP solution that is robust across a wide range of networks rather than fine-tuned for a particular one at the cost of another. TCP parallelization uses multiple virtual TCP connections to transfer data for an application process and opens a way to improve TCP performance across a wide range of environments, including high bandwidth-delay product (BDP), wireless, and conventional networks. In particular, it can significantly benefit the emerging high-speed wireless networks. Despite its potential to work well over a wide range of networks, it is not fully understood how TCP parallelization performs when experiencing various packet losses in heterogeneous environments. This paper examines the current TCP parallelization related methods under various packet losses and shows how to improve the performance of TCP parallelization.
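A back-of-the-envelope illustration of why parallel connections help under random loss, using the well-known Mathis et al. steady-state throughput estimate T ≈ (MSS/RTT)·sqrt(3/(2p)). This coarse model is not from the paper; the MSS, RTT, loss rate and connection count below are invented for illustration:

```python
import math

def tcp_throughput_bps(mss_bytes, rtt_s, loss_rate):
    """Mathis et al. steady-state TCP throughput estimate (bits/s)."""
    return (mss_bytes * 8 / rtt_s) * math.sqrt(3.0 / (2.0 * loss_rate))

# One connection vs. 8 virtual connections, each seeing the same random loss
single = tcp_throughput_bps(1460, 0.1, 1e-3)
parallel = 8 * tcp_throughput_bps(1460, 0.1, 1e-3)
print(single / 1e6, parallel / 1e6)  # Mbit/s
```

Because each virtual connection recovers from a loss independently, the aggregate scales roughly with the connection count until congestion losses (rather than random wireless losses) begin to couple the flows, which is the regime the paper's measurements probe.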
Abstract:
One of the critical challenges in automatic recognition of TV commercials is to generate a unique, robust and compact signature. Uniqueness indicates the ability to identify the similarity among commercial video clips which may have slight content variation. Robustness means the ability to match commercial video clips containing the same content but probably with different digitalization/encoding, some noise data, and/or transmission and recording distortion. Compactness concerns the capability of matching commercial video sequences with a low computation cost and storage overhead. In this paper, we present a binary signature based method, which meets all three criteria above, by combining the techniques of ordinal and color measurements. Experimental results on a real large commercial video database show that our novel approach delivers significantly better performance compared to the existing methods.
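The ordinal half of such a signature can be sketched simply: partition each frame into blocks and keep only the rank order of block intensities, which survives the brightness/contrast changes that re-encoding introduces. This is an illustrative sketch with synthetic frames, not the paper's exact binary signature:

```python
import numpy as np

def ordinal_signature(frame, grid=3):
    """Rank order of mean intensities over a grid x grid block partition.
    The rank pattern is invariant to global brightness/contrast changes."""
    h, w = frame.shape
    means = [frame[i*h//grid:(i+1)*h//grid, j*w//grid:(j+1)*w//grid].mean()
             for i in range(grid) for j in range(grid)]
    return np.argsort(np.argsort(means))  # rank of each block

rng = np.random.default_rng(4)
# Synthetic 90x120 frame with nine clearly distinct block levels plus noise
base = np.arange(9).reshape(3, 3) * 25.0
frame = np.kron(base, np.ones((30, 40))) + rng.normal(0, 5, (90, 120))
# Simulated re-encoding: contrast scaling, brightness shift, mild noise
degraded = 0.8 * frame + 20 + rng.normal(0, 1, frame.shape)

sig_a = ordinal_signature(frame)
sig_b = ordinal_signature(degraded)
print(np.array_equal(sig_a, sig_b))
```

Matching then reduces to comparing short rank vectors (or their binarized form) rather than raw pixels, which is what keeps the computation and storage overhead low.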
Abstract:
Experimental mechanical sieving methods are applied to samples of shellfish remains from three sites in southeast Queensland, Seven Mile Creek Mound, Sandstone Point and One-Tree, to test the efficacy of various recovery and quantification procedures commonly applied to shellfish assemblages in Australia. There has been considerable debate regarding the most appropriate sieve sizes and quantification methods that should be applied in the recovery of vertebrate faunal remains. Few studies, however, have addressed the impact of recovery and quantification methods on the interpretation of invertebrates, specifically shellfish remains. In this study, five shellfish taxa representing four bivalves (Anadara trapezia, Trichomya hirsutus, Saccostrea glomerata, Donax deltoides) and one gastropod (Pyrazus ebeninus) common in eastern Australian midden assemblages are sieved through 10mm, 6.3mm and 3.15mm mesh. Results are quantified using MNI, NISP and weight. Analyses indicate that different structural properties and pre- and post-depositional factors affect recovery rates. Fragile taxa (T. hirsutus) or those with foliated structure (S. glomerata) tend to be overrepresented by NISP measures in smaller sieve fractions, while more robust taxa (A. trapezia and P. ebeninus) tend to be overrepresented by weight measures. Results demonstrate that, for all quantification methods tested, a 3mm sieve should be used on all sites to allow for regional comparability and to effectively collect all available information about the shellfish remains.
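The difference between the two count-based quantification measures is easy to show in miniature. NISP counts every identified fragment, while MNI infers a minimum individual count from a non-repeating element (for bivalves, paired left/right hinges). The fragment counts below are invented for illustration, not data from the study:

```python
# Toy quantification of two bivalve taxa from hypothetical sieve fractions
fragments = {
    "Anadara trapezia":     {"left_hinge": 12, "right_hinge": 9, "other": 40},
    "Saccostrea glomerata": {"left_hinge": 3, "right_hinge": 5, "other": 120},
}

results = {}
for taxon, parts in fragments.items():
    nisp = sum(parts.values())                                # every fragment
    mni = max(parts["left_hinge"], parts["right_hinge"])      # paired element
    results[taxon] = (nisp, mni)
    print(taxon, "NISP =", nisp, "MNI =", mni)
```

The heavily fragmented oyster dominates by NISP despite implying fewer individuals than the sturdier Anadara, which is exactly the overrepresentation bias the abstract describes for foliated taxa in small sieve fractions.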
Abstract:
This paper critically assesses several loss allocation methods based on the type of competition each method promotes. This understanding assists in determining which method will promote more efficient network operations when implemented in deregulated electricity industries. The methods addressed in this paper include the pro rata [1], proportional sharing [2], loss formula [3], incremental [4], and a new method proposed by the authors of this paper, which is loop-based [5]. These methods are tested on a modified Nordic 32-bus network, where different case studies of different operating points are investigated. The varying results obtained for each allocation method at different operating points make it possible to distinguish methods that promote unhealthy competition from those that encourage better system operation.
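The simplest of the methods compared, pro rata allocation, shares total losses in proportion to each user's power offtake regardless of network location; a minimal sketch with invented bus loads and loss figures (not the Nordic 32-bus case studies):

```python
# Pro rata loss allocation: losses shared in proportion to demand,
# ignoring where in the network each load sits. Figures are invented.
total_losses_mw = 6.0
loads_mw = {"bus_A": 50.0, "bus_B": 30.0, "bus_C": 20.0}

total_load = sum(loads_mw.values())
allocation = {bus: total_losses_mw * mw / total_load
              for bus, mw in loads_mw.items()}
print(allocation)  # {'bus_A': 3.0, 'bus_B': 1.8, 'bus_C': 1.2}
```

Because the split depends only on volumes, a remote load pays the same rate as one next to the generators, which is precisely the locational inefficiency that incremental and loop-based methods aim to correct.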
Abstract:
Quantum computers promise to increase greatly the efficiency of solving problems such as factoring large integers, combinatorial optimization and quantum physics simulation. One of the greatest challenges now is to implement the basic quantum-computational elements in a physical system and to demonstrate that they can be reliably and scalably controlled. One of the earliest proposals for quantum computation is based on implementing a quantum bit with two optical modes containing one photon. The proposal is appealing because of the ease with which photon interference can be observed. Until now, it suffered from the requirement for non-linear couplings between optical modes containing few photons. Here we show that efficient quantum computation is possible using only beam splitters, phase shifters, single photon sources and photo-detectors. Our methods exploit feedback from photo-detectors and are robust against errors from photon loss and detector inefficiency. The basic elements are accessible to experimental investigation with current technology.
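The interference primitive these linear-optics schemes build on can be checked by hand: for two indistinguishable photons entering opposite ports of a 50:50 beam splitter, the amplitude for one photon to exit from each port cancels (Hong-Ou-Mandel bunching). This sketch shows only that primitive, not the paper's full scheme with ancilla photons and detector feedback; the t, r convention is one common choice:

```python
import numpy as np

# 50:50 beam splitter convention: transmission t = 1/sqrt(2),
# reflection r = i/sqrt(2) (the phase keeps the transformation unitary).
t = 1 / np.sqrt(2)
r = 1j / np.sqrt(2)

# One photon in each input port. The coincidence outcome (one photon per
# output port) sums two indistinguishable paths: both transmit, both reflect.
coincidence_amp = t * t + r * r
print(abs(coincidence_amp) ** 2)  # -> 0.0
```

The exact cancellation (t² = 1/2, r² = −1/2) means both photons always leave through the same port, and it is this kind of bosonic interference, measured by photo-detectors, that substitutes for the nonlinear couplings earlier proposals required.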
Abstract:
We propose quadrature rules for the approximation of line integrals possessing logarithmic singularities and show their convergence. In some instances a superconvergence rate is demonstrated.
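A generic way to handle such integrals, shown here for illustration (this is classical singularity subtraction with Gauss-Legendre nodes, not the specific rules proposed in the paper): writing I = ∫₀¹ f(x) ln(x) dx as ∫₀¹ (f(x) − f(0)) ln(x) dx − f(0), using ∫₀¹ ln(x) dx = −1, leaves a bounded integrand that an ordinary rule handles well.

```python
import numpy as np

def log_singular_quad(f, n=40):
    """Approximate I = int_0^1 f(x) ln(x) dx by singularity subtraction:
    I = int_0^1 (f(x) - f(0)) ln(x) dx - f(0).
    The subtracted integrand vanishes at x = 0, so plain Gauss-Legendre
    converges acceptably despite the logarithmic singularity."""
    nodes, weights = np.polynomial.legendre.leggauss(n)
    x = 0.5 * (nodes + 1.0)   # map [-1, 1] -> [0, 1]
    w = 0.5 * weights
    smooth = (f(x) - f(0.0)) * np.log(x)
    return np.sum(w * smooth) - f(0.0)

# Check against the exact value: int_0^1 x ln(x) dx = -1/4
print(log_singular_quad(lambda x: x))  # -> approximately -0.25
```

Purpose-built rules like those in the paper incorporate the logarithm into the weights themselves, which restores full (and sometimes superconvergent) accuracy that plain subtraction cannot match for less smooth integrands.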