80 results for Mixed-integer linear programming
Abstract:
Pollution by polycyclic aromatic hydrocarbons (PAHs) is widespread due to unsuitable disposal of industrial waste. PAHs are mostly designated as priority pollutants by environmental protection authorities worldwide. Phenanthrene, a typical PAH, was selected as the target compound in this paper. The PAH-degrading mixed culture, named ZM, was collected from a petroleum-contaminated river bed. This culture was injected into phenanthrene solutions at different concentrations to quantify the biodegradation process. Results show near-complete removal of phenanthrene within three days of biodegradation when the initial phenanthrene concentration is low. When the initial concentration is high, the absolute removal rate increases, but 20%-40% of the phenanthrene remains at the end of the experiment. The biomass peaks on the third day due to the combined effects of microbial growth and decay. A second peak is evident in cases with a high initial concentration, possibly due to production of an intermediate metabolite. The pH generally decreased during biodegradation because of the production of organic acid. Two phenomenological models were designed to simulate phenanthrene biodegradation and biomass growth. A relatively simple model that does not consider the intermediate metabolite and its inhibition of phenanthrene biodegradation cannot fit the observed data. A modified Monod model that accounts for an intermediate metabolite (organic acid) and its reversible inhibitory effect depicts the experimental results reasonably well.
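A minimal sketch of the kind of Monod-type kinetics with metabolite inhibition that the abstract describes. All parameter values and the simple Euler time-stepping are illustrative assumptions, not the paper's fitted model.

```python
# Sketch of a modified Monod model: substrate (phenanthrene) degradation,
# biomass growth/decay, and an organic-acid metabolite that inhibits
# degradation. Parameter values below are hypothetical.
def simulate(S0, X0, mu_max=0.5, Ks=5.0, Y=0.4, kd=0.05,
             alpha=0.3, Ki=2.0, dt=0.01, days=3.0):
    S, X, I = S0, X0, 0.0                    # substrate, biomass, metabolite
    for _ in range(int(days / dt)):
        inhibition = 1.0 / (1.0 + I / Ki)    # metabolite slows degradation
        mu = mu_max * S / (Ks + S) * inhibition
        dS = -(mu / Y) * X                   # substrate consumed
        dX = mu * X - kd * X                 # growth minus endogenous decay
        dI = alpha * (-dS)                   # acid produced from substrate
        S = max(S + dS * dt, 0.0)
        X += dX * dt
        I += dI * dt
    return S, X, I

S_end, X_end, I_end = simulate(S0=10.0, X0=1.0)
```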
Abstract:
The classification rules of linear discriminant analysis are defined by the true mean vectors and the common covariance matrix of the populations from which the data come. Because these true parameters are generally unknown, they are commonly estimated by the sample mean vector and covariance matrix of the data in a training sample randomly drawn from each population. However, these sample statistics are notoriously susceptible to contamination by outliers, a problem compounded by the fact that the outliers may be invisible to conventional diagnostics. High-breakdown estimation is a procedure designed to remove this cause for concern by producing estimates that are immune to serious distortion by a minority of outliers, regardless of their severity. In this article we motivate and develop a high-breakdown criterion for linear discriminant analysis and give an algorithm for its implementation. The procedure is intended to supplement rather than replace the usual sample-moment methodology of discriminant analysis either by providing indications that the dataset is not seriously affected by outliers (supporting the usual analysis) or by identifying apparently aberrant points and giving resistant estimators that are not affected by them.
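For concreteness, a sketch of the classical sample-moment discriminant rule that the high-breakdown procedure is meant to supplement: assign a point to the population with the larger pooled-covariance discriminant score. The synthetic data and population means are illustrative; the robust variant would replace the sample moments with high-breakdown estimates.

```python
import numpy as np

# Classical two-group linear discriminant rule from sample moments.
rng = np.random.default_rng(0)
X1 = rng.normal([0.0, 0.0], 1.0, size=(50, 2))   # training sample, group 1
X2 = rng.normal([3.0, 3.0], 1.0, size=(50, 2))   # training sample, group 2

m1, m2 = X1.mean(axis=0), X2.mean(axis=0)        # sample mean vectors
S_pooled = ((X1 - m1).T @ (X1 - m1) +
            (X2 - m2).T @ (X2 - m2)) / (len(X1) + len(X2) - 2)
S_inv = np.linalg.inv(S_pooled)

def classify(x):
    # Linear discriminant scores; larger score wins.
    d1 = x @ S_inv @ m1 - 0.5 * m1 @ S_inv @ m1
    d2 = x @ S_inv @ m2 - 0.5 * m2 @ S_inv @ m2
    return 1 if d1 > d2 else 2
```

A single gross outlier in X1 or X2 can distort m1, m2, and S_pooled arbitrarily, which is the vulnerability the abstract addresses.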
Abstract:
When linear equality constraints are invariant through time they can be incorporated into estimation by restricted least squares. If, however, the constraints are time-varying, this standard methodology cannot be applied. In this paper we show how to incorporate linear time-varying constraints into the estimation of econometric models. The method involves the augmentation of the observation equation of a state-space model prior to estimation by the Kalman filter. Numerical optimisation routines are used for the estimation. A simple example drawn from demand analysis is used to illustrate the method and its application.
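A minimal sketch of the augmentation idea: a time-varying constraint R_t x_t = r_t is appended to the observation equation as an extra noise-free "observation" before the usual Kalman update. The state dimension, matrices, and constraint below are illustrative assumptions, and the full estimation via numerical optimisation is not shown.

```python
import numpy as np

def kalman_update(x, P, H, y, Rv):
    # Standard measurement update for observation y = H x + v, v ~ N(0, Rv).
    S = H @ P @ H.T + Rv
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (y - H @ x)
    P = P - K @ H @ P
    return x, P

# Prior and (hypothetical) observation equation.
x = np.zeros(2)
P = np.eye(2)
H = np.array([[1.0, 0.0]])
y = np.array([2.0])
Rv = np.array([[0.5]])

# Augment with the time-varying constraint x0 + x1 = 3 as a
# zero-variance observation row.
H_aug = np.vstack([H, [[1.0, 1.0]]])
y_aug = np.concatenate([y, [3.0]])
Rv_aug = np.block([[Rv, np.zeros((1, 1))],
                   [np.zeros((1, 1)), np.zeros((1, 1))]])

x_new, P_new = kalman_update(x, P, H_aug, y_aug, Rv_aug)
# The updated state satisfies the constraint exactly.
```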
Abstract:
An algorithm for explicit integration of structural dynamics problems with multiple time steps is proposed that averages accelerations to obtain subcycle states at a nodal interface between regions integrated with different time steps. With integer time step ratios, the resulting subcycle updates at the interface sum to give the same effect as a central difference update over a major cycle. The algorithm is shown to have good accuracy and, in linear elastic analysis, stability properties similar to those of constant-velocity subcycling algorithms. The implementation of a generalised form of the algorithm with non-integer time step ratios is presented. (C) 1997 by John Wiley & Sons, Ltd.
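The base scheme whose subcycled variant the abstract analyses is the central difference update. A minimal single-degree-of-freedom sketch (the subcycling itself is not shown, and the mass/stiffness values are illustrative):

```python
# Central-difference time integration for an undamped linear oscillator
# m*u'' + k*u = 0, started from u(0)=u0, u'(0)=v0.
def central_difference(m, k, u0, v0, dt, steps):
    u = u0
    a = -k / m * u                            # initial acceleration
    u_prev = u0 - dt * v0 + 0.5 * dt**2 * a   # fictitious previous step
    for _ in range(steps):
        u_next = 2.0 * u - u_prev + dt**2 * a
        u_prev, u = u, u_next
        a = -k / m * u
    return u

u_final = central_difference(m=1.0, k=1.0, u0=1.0, v0=0.0, dt=0.01, steps=1000)
```

The scheme is conditionally stable (here it requires dt below 2*sqrt(m/k)), which is why regions with fine meshes drive the small time step and motivate subcycling.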
Abstract:
We consider algorithms for computing the Smith normal form of integer matrices. A variety of different strategies have been proposed, primarily aimed at avoiding the major obstacle that occurs in such computations: explosive growth in the size of intermediate entries. We present a new algorithm with excellent performance. We investigate the complexity of such computations, indicating relationships with NP-complete problems. We also describe new heuristics which perform well in practice. We present experimental evidence which shows our algorithm outperforming previous methods. (C) 1997 Academic Press Limited.
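A small illustration of what the Smith normal form computes (not the paper's algorithm): for a nonsingular 2x2 integer matrix, the invariant factors can be read off directly, since d1 is the gcd of all entries and d1*d2 equals |det A|. This closed form sidesteps the row/column reduction whose intermediate entry growth the abstract discusses, but it only works in the 2x2 case.

```python
from math import gcd

def smith_2x2(a, b, c, d):
    # Invariant factors of [[a, b], [c, d]]: d1 = gcd of 1x1 minors,
    # d1 * d2 = gcd of 2x2 minors = |det|.
    det = abs(a * d - b * c)
    assert det != 0, "expects a nonsingular matrix"
    d1 = gcd(gcd(a, b), gcd(c, d))
    return d1, det // d1

print(smith_2x2(2, 4, 6, 8))   # diag(2, 4): gcd = 2, |det| = 8
```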
Abstract:
The anisotropic norm of a linear discrete-time-invariant system measures system output sensitivity to stationary Gaussian input disturbances of bounded mean anisotropy. Mean anisotropy characterizes the degree of predictability (or colouredness) and spatial non-roundness of the noise. The anisotropic norm falls between the H-2 and H-infinity norms and accommodates their loss of performance when the probability structure of input disturbances is not exactly known. This paper develops a method for numerical computation of the anisotropic norm which involves linked Riccati and Lyapunov equations and an associated equation of a special type.
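As a related minimal sketch of the Lyapunov-equation ingredient (not the anisotropic-norm computation itself): the H-2 norm of a stable discrete-time system (A, B, C) follows from the controllability Gramian P solving P = A P A' + B B', obtained here by fixed-point iteration. The system matrices are illustrative assumptions.

```python
import numpy as np

# Hypothetical stable discrete-time system (spectral radius of A < 1).
A = np.array([[0.5, 0.1],
              [0.0, 0.3]])
B = np.array([[1.0],
              [0.5]])
C = np.array([[1.0, 0.0]])

# Solve the discrete Lyapunov equation P = A P A' + B B' by iteration;
# the iteration converges because A is stable.
P = np.zeros((2, 2))
for _ in range(200):
    P = A @ P @ A.T + B @ B.T

h2_norm = float(np.sqrt(np.trace(C @ P @ C.T)))
```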
Abstract:
Ussing [1] considered the steady flux of a single chemical component diffusing through a membrane under the influence of chemical potentials and derived from his linear model an expression for the ratio of this flux to that of the complementary experiment in which the boundary conditions were interchanged. Here, an extension of Ussing's flux ratio theorem is obtained for n chemically interacting components governed by a linear system of diffusion-migration equations that may also incorporate linear temporary trapping reactions. The determinants of the output flux matrices for complementary experiments are shown to satisfy an Ussing flux ratio formula for steady state conditions of the same form as for the well-known one-component case. (C) 2000 Elsevier Science Ltd. All rights reserved.
Abstract:
A mixture model incorporating long-term survivors has been adopted in the field of biostatistics where some individuals may never experience the failure event under study. The surviving fractions may be considered as cured. In most applications, the survival times are assumed to be independent. However, when the survival data are obtained from a multi-centre clinical trial, it is conceivable that the environmental conditions and facilities shared within a clinic affect the proportion cured as well as the failure risk for the uncured individuals. This necessitates a long-term survivor mixture model with random effects. In this paper, the long-term survivor mixture model is extended for the analysis of multivariate failure time data using the generalized linear mixed model (GLMM) approach. The proposed model is applied to analyse a numerical data set from a multi-centre clinical trial of carcinoma as an illustration. Some simulation experiments are performed to assess the applicability of the model based on the average biases of the estimates obtained. Copyright (C) 2001 John Wiley & Sons, Ltd.
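The basic cure-mixture structure the abstract extends can be written as S(t) = pi + (1 - pi) * S_u(t), where pi is the cured fraction and S_u the survival function of the uncured. A minimal sketch with an exponential S_u; the random effects and GLMM machinery are not shown, and the parameter values are illustrative.

```python
import math

def mixture_survival(t, pi=0.3, hazard=0.5):
    # Long-term survivor mixture: cured fraction pi never fails,
    # uncured fraction follows an (assumed) exponential survival curve.
    return pi + (1.0 - pi) * math.exp(-hazard * t)
```

The defining feature is the plateau: S(t) tends to pi rather than 0 as t grows, which is what distinguishes cure models from standard survival models.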
Abstract:
Any given n × n matrix A is shown to be a restriction, to the A-invariant subspace, of a nonnegative N × N matrix B of spectral radius ρ(B) arbitrarily close to ρ(A). A difference inclusion x_{k+1} ∈ A x_k, where A is a compact set of matrices, is asymptotically stable if and only if A can be extended to a set B of nonnegative matrices B with ‖B‖₁ < 1 or ‖B‖∞ < 1. Similar results are derived for differential inclusions. 
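A sketch of the easy direction of the criterion above: if every matrix in the set already has induced 1-norm (maximum absolute column sum) below 1, the difference inclusion is asymptotically stable, since every trajectory contracts in that norm. The example matrices are illustrative; the theorem's substance is that stability can always be certified this way after extending to nonnegative matrices.

```python
import numpy as np

# Hypothetical compact set of matrices defining x_{k+1} in {A @ x_k}.
mats = [np.array([[0.4, 0.1],
                  [0.2, 0.3]]),
        np.array([[0.1, 0.5],
                  [0.3, 0.2]])]

# np.linalg.norm(A, 1) is the induced 1-norm: max column sum of |A|.
stable = all(np.linalg.norm(A, 1) < 1.0 for A in mats)
```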