20 results for Robustness
at University of Queensland eSpace - Australia
Abstract:
An operational space map is an efficient tool for comparing a large number of operational strategies to find an optimal choice of setpoints based on a multicriterion. Typically, such a multicriterion includes a weighted sum of the cost of operation and effluent quality. Due to the relatively high cost of aeration, such a definition of optimality results in a relatively high fraction of the effluent total nitrogen being in the form of ammonium. Such a strategy may, however, introduce a risk into operation, because a low degree of ammonium removal leads to a low amount of nitrifiers. This in turn reduces the ability to reject event disturbances such as large variations in the ammonium load, drops in temperature, or the presence of toxic/inhibitory compounds in the influent. Hedging is a risk minimisation tool, with the aim to "reduce one's risk of loss on a bet or speculation by compensating transactions on the other side" (The Concise Oxford Dictionary (1995)). In wastewater treatment plant operation, hedging can be applied by choosing a higher level of ammonium removal to increase the amount of nitrifiers. This is a sensible way to introduce disturbance rejection ability into the multicriterion. In practice, this is done by deciding upon an internal effluent ammonium criterion. In some countries, such as Germany, a separate criterion already applies to the level of ammonium in the effluent. However, in most countries the effluent criterion applies to total nitrogen only. In these cases, an internal effluent ammonium criterion should be selected in order to secure proper disturbance rejection ability.
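A minimal Python sketch of the hedging idea: a grid of dissolved-oxygen setpoints is scored with a weighted-sum multicriterion, and hedging is applied as an internal effluent ammonium limit. The cost and ammonium response curves, the weights, and the 2.0 mg/l limit are all illustrative assumptions, not values from the study.

```python
import numpy as np

do_setpoints = np.linspace(0.5, 3.0, 26)            # dissolved-oxygen setpoints (mg/l)

aeration_cost = 10.0 * do_setpoints                 # operating cost rises with aeration (assumed)
effluent_nh4 = 8.0 * np.exp(-1.5 * do_setpoints)    # ammonium falls with aeration (assumed)
effluent_totn = effluent_nh4 + 6.0                  # total N proxy: ammonium plus a nitrate floor

w_cost, w_quality = 1.0, 2.0                        # weights of the multicriterion
score = w_cost * aeration_cost + w_quality * effluent_totn

best_unhedged = do_setpoints[np.argmin(score)]

# Hedging: accept only setpoints whose effluent ammonium stays below an
# internal limit, keeping enough nitrifiers to reject load/temperature upsets.
nh4_limit = 2.0                                     # internal criterion (mg/l), assumed
feasible = effluent_nh4 <= nh4_limit
best_hedged = do_setpoints[feasible][np.argmin(score[feasible])]

print(f"unhedged optimum: DO = {best_unhedged:.2f} mg/l")
print(f"hedged optimum:   DO = {best_hedged:.2f} mg/l (NH4 <= {nh4_limit} mg/l)")
```

With these made-up curves the hedged optimum sits at a higher aeration level than the unhedged one, trading operating cost for a larger nitrifier population and better disturbance rejection, which is the trade the abstract describes.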
Abstract:
This paper investigates the robustness of a range of short-term interest rate models. We examine the robustness of these models over different data sets, time periods, sampling frequencies, and estimation techniques. We examine a range of popular one-factor models that allow the conditional mean (drift) and conditional variance (diffusion) to be functions of the current short rate. We find that parameter estimates are highly sensitive to all of these factors in the eight countries that we examine. Since parameter estimates are not robust, these models should be used with caution in practice.
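The one-factor family described here is commonly nested in the CKLS form dr = (alpha + beta*r)dt + sigma*r^gamma dW, which contains the Vasicek (gamma = 0) and CIR (gamma = 0.5) models as special cases. The hedged Python sketch below simulates this nesting with illustrative parameters, not estimates from the paper's eight-country data sets, to show how strongly the diffusion specification shapes path behaviour.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_ckls(r0, alpha, beta, sigma, gamma, dt, n_steps):
    """Euler-Maruyama path of dr = (alpha + beta*r)dt + sigma*r^gamma dW."""
    r = np.empty(n_steps + 1)
    r[0] = r0
    for t in range(n_steps):
        drift = (alpha + beta * r[t]) * dt
        diffusion = sigma * max(r[t], 1e-8) ** gamma * np.sqrt(dt) * rng.standard_normal()
        r[t + 1] = max(r[t] + drift + diffusion, 1e-8)   # keep the rate positive
    return r

# Same drift, different diffusion elasticities: small specification
# changes move the simulated paths considerably.
for gamma in (0.0, 0.5, 1.5):
    path = simulate_ckls(r0=0.05, alpha=0.01, beta=-0.2, sigma=0.1, gamma=gamma,
                         dt=1 / 252, n_steps=2520)
    print(f"gamma = {gamma}: mean = {path.mean():.4f}, std = {path.std():.4f}")
```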
Abstract:
We define several quantitative measures of the robustness of a quantum gate against noise. Exact analytic expressions for the robustness against depolarizing noise are obtained for all bipartite unitary quantum gates, and it is found that the controlled-NOT gate is the most robust two-qubit quantum gate, in the sense that it is the quantum gate which can tolerate the most depolarizing noise and still generate entanglement. Our results enable us to place several analytic upper bounds on the value of the threshold for quantum computation, with the best bound in the most pessimistic error model being p_th ≤ 0.5.
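As a rough illustration of the kind of question asked here (not the paper's exact robustness measure), the sketch below mixes the Bell state produced by CNOT acting on |+>|0> with two-qubit depolarizing noise and applies the Peres-Horodecki partial-transpose test, which is exact for two qubits, to locate the noise level at which the output entanglement disappears.

```python
import numpy as np

CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
zero = np.array([1, 0], dtype=complex)
psi = CNOT @ np.kron(plus, zero)          # Bell state (|00> + |11>)/sqrt(2)
rho = np.outer(psi, psi.conj())

def partial_transpose(rho4):
    """Transpose the second qubit of a 4x4 two-qubit density matrix."""
    r = rho4.reshape(2, 2, 2, 2)          # axes: (rowA, rowB, colA, colB)
    return r.transpose(0, 3, 2, 1).reshape(4, 4)

def is_entangled(rho4):
    """PPT criterion: a negative partial-transpose eigenvalue means entanglement."""
    return np.linalg.eigvalsh(partial_transpose(rho4)).min() < -1e-12

for p in np.linspace(0, 1, 101):
    mixed = (1 - p) * rho + p * np.eye(4) / 4
    if not is_entangled(mixed):
        print(f"entanglement lost near p = {p:.2f}")   # about 2/3 for this Werner-state mixture
        break
```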
Abstract:
Dynamic spectrum management (DSM) comprises a new set of techniques for multiuser power allocation and/or detection in digital subscriber line (DSL) networks. At the Alcatel Research and Innovation Labs, we have recently developed a DSM test bed, which allows the performance of DSM algorithms to be evaluated in practice. With this test bed, we have evaluated the performance of a DSM level-1 algorithm known as iterative water-filling in an ADSL scenario. This paper describes, on the one hand, the performance gains achieved with iterative water-filling and, on the other hand, the nonstationary noise robustness of DSM-enabled ADSL modems. It is shown that DSM trades off nonstationary noise robustness for performance improvements. A new bit swap procedure is then introduced to increase the noise robustness when applying DSM.
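Iterative water-filling, the DSM level-1 algorithm evaluated here, lets each user repeatedly water-fill its power budget against the background noise plus the crosstalk currently generated by the other users. The Python sketch below is an illustrative implementation with made-up channel and crosstalk gains, not the test-bed configuration.

```python
import numpy as np

n_users, n_tones = 2, 64
rng = np.random.default_rng(1)
direct = rng.uniform(0.5, 1.0, (n_users, n_tones))             # direct gains |h_nn|^2 per tone
cross = rng.uniform(0.01, 0.05, (n_users, n_users, n_tones))   # crosstalk gains |h_nm|^2
for n in range(n_users):
    cross[n, n] = 0.0                                          # no self-crosstalk
noise = 1e-3
budget = np.full(n_users, 1.0)                                 # per-user power budgets

def waterfill(inv_gain, p_total, iters=60):
    """Single-user water-filling: p_k = max(mu - inv_gain_k, 0) with sum p_k = p_total."""
    lo, hi = 0.0, inv_gain.max() + p_total
    for _ in range(iters):                                     # bisection on the water level mu
        mu = 0.5 * (lo + hi)
        if np.maximum(mu - inv_gain, 0.0).sum() > p_total:
            hi = mu
        else:
            lo = mu
    return np.maximum(mu - inv_gain, 0.0)

power = np.zeros((n_users, n_tones))
for _ in range(50):                                            # outer iterations to convergence
    for n in range(n_users):
        interference = noise + np.einsum('mt,mt->t', cross[n], power)
        power[n] = waterfill(interference / direct[n], budget[n])

sinr = direct * power / (noise + np.einsum('nmt,mt->nt', cross, power))
print("per-user rates (bits per DMT symbol):", np.log2(1 + sinr).sum(axis=1).round(1))
```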
Abstract:
We investigate decoherence effects in the recently suggested quantum-computation scheme using weak nonlinearities, strong probe coherent fields, detection, and feedforward methods. It is shown that in the weak-nonlinearity-based quantum gates, decoherence in nonlinear media can be made arbitrarily small simply by using arbitrarily strong probe fields, provided photon-number-resolving detection is used. In contrast, we find that homodyne detection with feedforward is not appropriate for this scheme, because in that case decoherence increases rapidly as the probe field grows stronger.
Abstract:
The robustness of mathematical models for biological systems is studied by sensitivity analysis and stochastic simulations. Using a neural network model with three genes as the test problem, we study the robustness properties of synthesis and degradation processes. For single-parameter robustness, sensitivity analysis techniques are applied to study parameter variations, and stochastic simulations are used to investigate the impact of external noise. Results of sensitivity analysis are consistent with those obtained by stochastic simulations. Stochastic models with external noise can be used to study robustness not only to external noise but also to parameter variations. For external noise, we also use stochastic models to study the robustness of the function of each gene and of the system as a whole.
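The two robustness probes described here can be illustrated on a toy three-gene network. In the hedged Python sketch below (an illustrative repressilator-style model, not the paper's), single-parameter robustness is probed by a finite-difference sensitivity to a synthesis-rate parameter, and external noise is probed by Euler-Maruyama simulation with an additive noise term.

```python
import numpy as np

def rhs(x, k_syn, k_deg=1.0, n=2.0):
    """Each gene is synthesised under repression by the previous gene and degraded."""
    rep = x[[2, 0, 1]]                       # gene i repressed by gene i-1 (cyclically)
    return k_syn / (1.0 + rep**n) - k_deg * x

def simulate(k_syn, noise=0.0, dt=0.01, t_end=50.0, seed=0):
    """Euler-Maruyama integration; noise=0 recovers the deterministic model."""
    rng = np.random.default_rng(seed)
    x = np.full(3, 0.5)
    for _ in range(int(t_end / dt)):
        x = x + rhs(x, k_syn) * dt + noise * np.sqrt(dt) * rng.standard_normal(3)
        x = np.maximum(x, 0.0)               # concentrations stay non-negative
    return x

# Single-parameter robustness: finite-difference sensitivity of the endpoint
# to the synthesis rate k_syn.
k, dk = 2.0, 1e-4
sens = (simulate(k + dk) - simulate(k)) / dk
print("d(state)/d(k_syn):", sens.round(3))

# External noise: spread of endpoints over repeated stochastic runs.
ends = np.array([simulate(k, noise=0.05, seed=s) for s in range(20)])
print("endpoint std under external noise:", ends.std(axis=0).round(3))
```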
Abstract:
Finite mixture models are being increasingly used to model the distributions of a wide variety of random phenomena. While normal mixture models are often used to cluster data sets of continuous multivariate data, a more robust clustering can be obtained by considering the t mixture model-based approach. Mixtures of factor analyzers enable model-based density estimation to be undertaken for high-dimensional data where the number of observations n is not very large relative to their dimension p. As the approach using the multivariate normal family of distributions is sensitive to outliers, it is more robust to adopt the multivariate t family for the component error and factor distributions. The computational aspects associated with robustness and high dimensionality in these approaches to cluster analysis are discussed and illustrated.
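A compact EM iteration for a two-component multivariate t mixture with fixed degrees of freedom nu conveys the robustness mechanism: each point receives a weight u = (nu + p)/(nu + Mahalanobis distance), so outlying points are automatically downweighted. The Python sketch below is an illustrative implementation under those simplifying assumptions, not the authors' software.

```python
import numpy as np
from scipy.special import gammaln

def t_logpdf(X, mu, Sigma, nu):
    """Log density of the multivariate t distribution."""
    p = X.shape[1]
    diff = X - mu
    delta = np.einsum('ij,jk,ik->i', diff, np.linalg.inv(Sigma), diff)
    return (gammaln((nu + p) / 2) - gammaln(nu / 2) - (p / 2) * np.log(nu * np.pi)
            - 0.5 * np.linalg.slogdet(Sigma)[1] - ((nu + p) / 2) * np.log1p(delta / nu))

def fit_t_mixture(X, n_iter=100, nu=4.0, seed=0):
    n, p = X.shape
    rng = np.random.default_rng(seed)
    pi = np.array([0.5, 0.5])
    mu = X[rng.choice(n, 2, replace=False)]
    Sigma = [np.cov(X.T)] * 2
    for _ in range(n_iter):
        # E-step: responsibilities tau per component.
        logp = np.stack([np.log(pi[j]) + t_logpdf(X, mu[j], Sigma[j], nu)
                         for j in range(2)], axis=1)
        tau = np.exp(logp - logp.max(1, keepdims=True))
        tau /= tau.sum(1, keepdims=True)
        new_mu, new_Sigma = [], []
        for j in range(2):
            diff = X - mu[j]
            delta = np.einsum('ij,jk,ik->i', diff, np.linalg.inv(Sigma[j]), diff)
            u = (nu + p) / (nu + delta)              # small u for outlying points
            w = tau[:, j] * u
            m = w @ X / w.sum()                      # M-step: weighted mean
            d = X - m
            new_mu.append(m)
            new_Sigma.append((w[:, None] * d).T @ d / tau[:, j].sum())
        pi, mu, Sigma = tau.mean(0), np.array(new_mu), new_Sigma
    return pi, mu, Sigma

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0, 1, (200, 2)),
               rng.normal(5, 1, (200, 2)),
               np.full((5, 2), 50.0)])               # gross outliers
pi, mu, Sigma = fit_t_mixture(X)
print("mixing proportions:", pi.round(2))
print("component means:\n", mu.round(2))
```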
Abstract:
The level set method has been implemented in a computational volcanology context. New techniques are presented to solve the advection equation and the reinitialisation equation. These techniques are based upon an algorithm developed in the finite difference context, but are modified to take advantage of the robustness of the finite element method. The resulting algorithm is tested on a well-documented Rayleigh–Taylor instability benchmark [19] and on an axisymmetric problem where the analytical solution is known. Finally, the algorithm is applied to a basic study of lava dome growth.
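Of the two equations mentioned, the reinitialisation equation phi_t = sign(phi_0)(1 - |grad phi|) is the less familiar: iterated to steady state, it relaxes phi back to a signed-distance function without moving its zero level set. The hedged 1-D finite-difference sketch below (not the paper's finite element variant) shows the idea.

```python
import numpy as np

n = 201
x = np.linspace(-1.0, 1.0, n)
dx = x[1] - x[0]
phi = x**2 - 0.25                 # zero level set at x = +/-0.5, but |phi_x| != 1

def reinitialise(phi, dx, n_steps=300):
    """Iterate phi_t = sign(phi0) * (1 - |phi_x|) to steady state (Godunov upwinding)."""
    dt = 0.4 * dx                                    # CFL-limited pseudo-time step
    s = phi / np.sqrt(phi**2 + dx**2)                # smoothed sign of the initial phi
    for _ in range(n_steps):
        dm = np.diff(phi, prepend=phi[0]) / dx       # backward difference
        dp = np.diff(phi, append=phi[-1]) / dx       # forward difference
        grad_pos = np.sqrt(np.maximum(np.maximum(dm, 0)**2, np.minimum(dp, 0)**2))
        grad_neg = np.sqrt(np.maximum(np.minimum(dm, 0)**2, np.maximum(dp, 0)**2))
        grad = np.where(s > 0, grad_pos, grad_neg)   # information flows away from the interface
        phi = phi - dt * s * (grad - 1.0)
    return phi

phi = reinitialise(phi, dx)
# Away from the kink at x = 0, phi should now satisfy |phi_x| = 1 (a distance function).
interior = (np.abs(x) > 0.1) & (np.abs(x) < 0.9)
err = np.abs(np.abs(np.gradient(phi, dx)) - 1.0)[interior].max()
print(f"max | |phi_x| - 1 | away from the kink: {err:.3f}")
```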
Abstract:
Modeling volcanic phenomena is complicated by free surfaces that often support large rheological gradients. Analytical solutions and analogue models provide explanations for fundamental characteristics of lava flows, but more sophisticated models are needed, incorporating improved physics and rheology to capture realistic events. To advance our understanding of the flow dynamics of highly viscous lava in Peléean lava dome formation, axisymmetric Finite Element Method (FEM) models of generic endogenous dome growth have been developed. We use a novel technique, the level-set method, which tracks a moving interface while leaving the mesh unaltered. The model equations are formulated in an Eulerian framework. In this paper we test the quality of this technique in our numerical scheme by considering existing analytical and experimental models of lava dome growth which assume a constant Newtonian viscosity. We then compare our model against analytical solutions for real lava domes extruded on Soufrière, St. Vincent, W.I. in 1979 and Mount St. Helens, USA in October 1980, using an effective viscosity. The level-set method is found to be computationally light and robust enough to model the free surface of a growing lava dome. Moreover, modeling the extruded lava with a constant pressure head naturally produces a drop in extrusion rate with increasing dome height, which explains lava dome growth observables more appropriately than a fixed extrusion rate does. From the modeling point of view, the level-set method will ultimately provide an opportunity to capture more of the physics while benefiting from the numerical robustness of regular grids.
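The constant-pressure-head argument can be made concrete with a toy mass balance: if the source overpressure is fixed, the growing dome's static weight rho*g*h eats into the driving pressure, so the extrusion rate falls as the dome rises. In the hedged Python sketch below, the hemispherical geometry and every parameter value are illustrative assumptions, not values from the Soufrière or Mount St. Helens cases.

```python
from math import pi

rho, g = 2600.0, 9.81          # lava density (kg/m^3) and gravity (m/s^2), assumed
dP = 5.0e6                     # constant source overpressure (Pa), assumed
k = 2.0e-6                     # conduit conductance (m^3/s per Pa), assumed

h, t, dt = 10.0, 0.0, 60.0     # initial dome height (m), time (s), time step (s)
while t < 120 * 86400.0:       # run for 120 days
    Q = max(k * (dP - rho * g * h), 0.0)   # extrusion rate from the remaining head
    h += Q / (2.0 * pi * h**2) * dt        # V = (2/3) pi h^3  =>  dV = 2 pi h^2 dh
    t += dt
    if t % (20 * 86400.0) < dt:
        print(f"day {t / 86400:5.1f}: height = {h:6.1f} m, rate = {Q:5.2f} m^3/s")
```

The printed history shows the extrusion rate decaying as the dome approaches the height at which its weight balances the source pressure, the behaviour the abstract says matches growth observables better than a fixed extrusion rate.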
Abstract:
To translate and transfer solution data between two totally different meshes (i.e. mesh 1 and mesh 2), a consistent point-searching algorithm for solution interpolation in unstructured meshes consisting of 4-node bilinear quadrilateral elements is presented in this paper. The proposed algorithm has the following significant advantages: (1) The use of a point-searching strategy allows a point in one mesh to be accurately related to an element (containing this point) in another mesh. Thus, to translate/transfer the solution of any particular point from mesh 2 to mesh 1, only one element in mesh 2 needs to be inversely mapped. This minimizes the number of elements to which the inverse mapping is applied; in this regard, the present algorithm is very effective and efficient. (2) Analytical solutions for the local coordinates of any point in a 4-node quadrilateral element, which are derived in a rigorous mathematical manner in the context of this paper, make it possible to carry out the inverse mapping process very effectively and efficiently. (3) The use of consistent interpolation enables the interpolated solution to be compatible with the original solution and therefore guarantees an interpolated solution of extremely high accuracy. After the mathematical formulations of the algorithm are presented, the algorithm is tested and validated through a challenging problem. The results from the test problem demonstrate the generality, accuracy, effectiveness, efficiency and robustness of the proposed consistent point-searching algorithm. Copyright (C) 1999 John Wiley & Sons, Ltd.
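The two kernels the algorithm relies on are easy to sketch: inverse mapping of a physical point to local coordinates (xi, eta) inside a 4-node bilinear quadrilateral, and consistent interpolation with the same bilinear shape functions. The Python sketch below uses Newton iteration for the inverse map rather than the paper's closed-form analytical solution, and the element geometry and nodal values are illustrative.

```python
import numpy as np

def shape(xi, eta):
    """Bilinear shape functions on the reference square [-1, 1]^2."""
    return 0.25 * np.array([(1 - xi) * (1 - eta), (1 + xi) * (1 - eta),
                            (1 + xi) * (1 + eta), (1 - xi) * (1 + eta)])

def inverse_map(nodes, point, tol=1e-12):
    """Find (xi, eta) with sum_a N_a(xi, eta) * nodes[a] = point, by Newton iteration."""
    xi = eta = 0.0
    for _ in range(20):
        N = shape(xi, eta)
        dN_dxi = 0.25 * np.array([-(1 - eta), (1 - eta), (1 + eta), -(1 + eta)])
        dN_deta = 0.25 * np.array([-(1 - xi), -(1 + xi), (1 + xi), (1 - xi)])
        residual = N @ nodes - point
        if np.linalg.norm(residual) < tol:
            break
        J = np.column_stack([dN_dxi @ nodes, dN_deta @ nodes])  # 2x2 Jacobian
        xi, eta = np.array([xi, eta]) - np.linalg.solve(J, residual)
    return xi, eta

# Element of mesh 2 assumed to contain the point, and the nodal solution on it:
nodes = np.array([[0.0, 0.0], [2.0, 0.2], [2.2, 1.8], [-0.1, 2.0]])
u_nodes = np.array([1.0, 3.0, 2.0, 4.0])
point = np.array([1.0, 1.0])

xi, eta = inverse_map(nodes, point)
inside = abs(xi) <= 1 + 1e-9 and abs(eta) <= 1 + 1e-9   # the point-search containment test
u = shape(xi, eta) @ u_nodes                            # consistent interpolation
print(f"(xi, eta) = ({xi:.4f}, {eta:.4f}), inside = {inside}, u = {u:.4f}")
```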
Abstract:
In order to use the finite element method effectively and efficiently for solving fluid-rock interaction problems in pore-fluid saturated hydrothermal/sedimentary basins, we present in this paper new concepts and numerical algorithms to deal with the fundamental issues associated with these problems, issues that are often overlooked by purely numerical modelers. (1) Since the fluid-rock interaction problem involves heterogeneous chemical reactions between reactive aqueous chemical species in the pore-fluid and solid minerals in the rock masses, it is necessary to develop the concept of the generalized concentration of a solid mineral, so that two types of reactive mass transport equations, namely the conventional mass transport equation for the aqueous chemical species in the pore-fluid and the degenerated mass transport equation for the solid minerals in the rock mass, can be solved simultaneously in computation. (2) Since the reaction area between the pore-fluid and mineral surfaces is basically a function of the generalized concentration of the solid mineral, there is a definite need to appropriately consider the dependence of the dissolution rate of a dissolving mineral on its generalized concentration in the numerical analysis. (3) Considering the direct consequence of porosity evolution with time in the transient analysis of fluid-rock interaction problems, we propose a term splitting algorithm and the concept of equivalent source/sink terms in the mass transport equations, so that the problem of variable mesh Peclet and Courant numbers is converted into a problem with constant mesh Peclet and Courant numbers. The numerical results from an application example demonstrate the usefulness of the proposed concepts and the robustness of the proposed numerical algorithms in dealing with fluid-rock interaction problems in pore-fluid saturated hydrothermal/sedimentary basins. (C) 2001 Elsevier Science B.V. All rights reserved.
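The meshing difficulty behind point (3) is easy to quantify: as porosity phi evolves, the pore velocity v = q/phi changes, so the mesh Peclet number Pe = v*dx/D and the Courant number Cr = v*dt/dx drift during a transient run. The short Python sketch below illustrates that drift with assumed values; it does not implement the paper's term-splitting remedy, which converts these into constant numbers via equivalent source/sink terms.

```python
q = 1.0e-6            # Darcy flux (m/s), assumed constant
D = 1.0e-6            # dispersion coefficient (m^2/s), assumed
dx, dt = 0.5, 1.0e5   # element size (m) and time step (s), assumed

for phi in (0.30, 0.20, 0.10, 0.05):   # porosity closing as minerals precipitate
    v = q / phi                         # pore velocity rises as porosity drops
    print(f"phi = {phi:4.2f}: mesh Peclet = {v * dx / D:5.2f}, Courant = {v * dt / dx:5.2f}")
```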
Abstract:
Objective: Existing evidence suggests that family interventions can be effective in reducing relapse rates in schizophrenia and related conditions. Despite this, such interventions are not routinely delivered in Australian mental health services. The objective of the current study is to investigate the incremental cost-effectiveness ratios (ICERs) of introducing three types of family interventions, namely behavioural family management (BFM), behavioural intervention for families (BIF), and multiple family groups (MFG), into current mental health services in Australia. Method: The ICER of each of the family interventions is assessed from a health sector perspective (including the government, persons with schizophrenia, and their families/carers) using a standardized methodology. A two-stage approach is taken to the assessment of benefit. The first stage involves a quantitative analysis based on disability-adjusted life years (DALYs) averted. The second stage involves application of 'second filter' criteria (including equity, strength of evidence, feasibility and acceptability to stakeholders) to the results. The robustness of the results is tested using multivariate probabilistic sensitivity analysis. Results: The most cost-effective intervention is BIF (A$8000 per DALY averted), followed by MFG (A$21 000 per DALY averted) and lastly BFM (A$28 000 per DALY averted). The inclusion of time costs makes BFM more cost-effective than MFG. Variation of the discount rate has no effect on the conclusions. Conclusions: All three interventions are considered 'value for money' within an Australian context. This conclusion needs to be tempered against the methodological challenge of converting clinical outcomes into a generic economic outcome measure (DALY). Issues surrounding the feasibility of routinely implementing such interventions need to be addressed.
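The multivariate probabilistic sensitivity analysis mentioned in the Method can be illustrated with a small Monte Carlo sketch: incremental costs and DALYs averted are sampled from assumed distributions, and the resulting spread of the ICER is summarised. The gamma/normal parameters and the A$50 000 threshold below are placeholders, not the study's inputs.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

# Incremental cost of the intervention (A$) and DALYs averted per person:
d_cost = rng.gamma(shape=4.0, scale=500.0, size=n)    # right-skewed cost uncertainty
d_daly = rng.normal(loc=0.25, scale=0.05, size=n)     # effect-size uncertainty

icer = d_cost / d_daly                                # A$ per DALY averted
lo, med, hi = np.percentile(icer, [2.5, 50, 97.5])
print(f"ICER median A${med:,.0f}/DALY (95% interval A${lo:,.0f} - A${hi:,.0f})")

# Probability the intervention is 'value for money' at a given threshold:
threshold = 50_000.0                                  # assumed A$/DALY threshold
print(f"P(ICER < A${threshold:,.0f}/DALY) = {(icer < threshold).mean():.2f}")
```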
Abstract:
In a program of laboratory and field research over the last decade, the author has replicated and extended the attribution model of leadership (Green & Mitchell, 1979). This paper reports a cross-national test of the model, in which 172 Australian and 144 Canadian work supervisors recalled their attributional and evaluative responses to high and low levels of subordinate performance. It was expected that the supervisors' responses would conform to the predictions established in the earlier studies, but that there would be key differences across the cultures. In particular, Australians were expected to endorse more internal attributions for subordinate performance than Canadians, and to focus more on individual characteristics in evaluating performance. Results supported the model's robustness and the hypothesised cross-national differences. The implications of these results are discussed in terms of cross-cultural research opportunities and the need to take account of small but potentially important differences in supervisory styles across cultures.