943 results for Mathematics, Interdisciplinary Applications
Abstract:
The importance of the rate of change of the pollution stock in determining the damage to the environment has been an issue of increasing concern in the literature. This paper uses a three-sector (economy, population and environment), non-linear, discrete time, calibrated model to examine pollution control. The model explicitly links economic growth to the health of the environment. The stock of natural resources is affected by the rate of pollution flows, through their impact on the regenerative capacity of the natural resource stock. This can offer useful insights into pollution control strategies, particularly in developing countries where environmental resources are crucial for production in many sectors of the economy. Simulation exercises suggested that, under plausible assumptions, it is possible to reverse undesirable transient dynamics through pollution control expenditure, but this is dependent upon the strategies used for control. The best strategy is to spend money fostering the development of production technologies that reduce pollution rather than spending money dealing with the effects of the pollution flow into the environment. (C) 2001 Elsevier Science Ltd. All rights reserved.
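A minimal sketch of the kind of coupled discrete-time dynamics described above is given below. All functional forms, parameter values and variable names are illustrative assumptions for exposition, not the paper's calibrated three-sector model.

```python
import numpy as np

# Illustrative toy dynamics: pollution flow degrades the regeneration rate of
# the natural resource stock; a control budget is split between cleaner
# production technology and end-of-pipe cleanup.  All equations are assumed.
T = 200
N = np.empty(T); P = np.empty(T); Y = np.empty(T)
N[0], P[0] = 1.0, 0.1          # natural resource stock, pollution stock
K, r0, alpha = 1.5, 0.3, 0.4   # carrying capacity, base regrowth, damage sensitivity
delta, e0 = 0.05, 0.2          # pollution decay rate, emission intensity of output
tech_share = 0.7               # fraction of the control budget spent on cleaner technology
budget = 0.05                  # pollution control expenditure per period

for t in range(T - 1):
    Y[t] = N[t] ** 0.5                                  # output drawn from the resource stock
    flow = e0 * (1 - tech_share * budget) * Y[t]        # cleaner technology cuts the flow
    cleanup = (1 - tech_share) * budget                 # end-of-pipe spending removes stock
    P[t + 1] = max((1 - delta) * P[t] + flow - cleanup, 0.0)
    r = r0 / (1 + alpha * flow)                         # pollution flow degrades regeneration
    N[t + 1] = max(N[t] + r * N[t] * (1 - N[t] / K) - 0.1 * Y[t], 0.0)

Y[-1] = N[-1] ** 0.5
print(f"final resource stock {N[-1]:.3f}, final pollution stock {P[-1]:.3f}")
```

Raising tech_share in such a toy model mimics the paper's preferred strategy of targeting the pollution flow at source rather than treating its effects afterwards.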
Abstract:
In this paper, we present a new unified approach and an elementary proof of a very general theorem on the existence of a semicontinuous or continuous utility function representing a preference relation. A simple and interesting new proof of the famous Debreu Gap Lemma is given. In addition, we prove a new Gap Lemma for the rational numbers and derive some consequences. We also prove a theorem which characterizes the existence of upper semicontinuous utility functions on a preordered topological space which need not be second countable. This is a generalization of the classical theorem of Rader which only gives sufficient conditions for the existence of an upper semicontinuous utility function for second countable topological spaces. (C) 2002 Elsevier Science B.V. All rights reserved.
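For orientation, the result referred to as the Debreu Gap Lemma is usually stated as follows; this is the standard formulation, not necessarily the exact one proved in the paper.

```latex
% Standard statement of the Debreu Open Gap Lemma (for orientation only).
\textbf{Debreu Gap Lemma.}\quad
For every subset $S \subseteq \mathbb{R}$ there exists a strictly increasing
function $g \colon S \to \mathbb{R}$ such that every gap of $g(S)$ is open.
Here a \emph{gap} of a set $A \subseteq \mathbb{R}$ is a maximal nondegenerate
interval of $\mathbb{R} \setminus A$ that is bounded below and above by points
of $A$.  Its standard use: a continuous total preorder that is representable
by some utility function is representable by a continuous utility function.
```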
Abstract:
This paper presents a method of evaluating the expected value of a path integral for a general Markov chain on a countable state space. We illustrate the method with reference to several models, including birth-death processes and the birth, death and catastrophe process. (C) 2002 Elsevier Science Inc. All rights reserved.
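One generic way to compute such an expectation for a finite chain, sketched below, is the standard first-step linear system; the birth-death rates and the absorbing state are illustrative assumptions and this is not necessarily the construction used in the paper.

```python
import numpy as np

# Expected path integral E_i[ int_0^T f(X_t) dt ] up to absorption at state 0
# for a finite birth-death chain, via the first-step linear system
# (Q h)(i) = -f(i) on the non-absorbing states, with h(0) = 0.

n = 10                                  # states 0..n, state 0 absorbing
lam = lambda i: 0.8 * i                 # birth rates (illustrative)
mu = lambda i: 1.0 * i                  # death rates (illustrative)

Q = np.zeros((n + 1, n + 1))
for i in range(1, n + 1):
    if i < n:
        Q[i, i + 1] = lam(i)
    Q[i, i - 1] = mu(i)
    Q[i, i] = -Q[i].sum()

f = np.ones(n + 1)                      # f = 1 gives the expected time to absorption

C = slice(1, n + 1)                     # non-absorbing states
h = np.zeros(n + 1)
h[C] = np.linalg.solve(Q[C, C], -f[C])  # solve the restricted linear system

print("expected time to absorption from each starting state:", np.round(h, 3))
```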
Abstract:
In this paper we investigate the structure of non-representable preference relations. While there is a vast literature on different kinds of preference relations that can be represented by a real-valued utility function, very little is known or understood about preference relations that cannot be represented by a real-valued utility function. There has been no systematic analysis of the non-representation problem. In this paper we give a complete description of non-representable preference relations which are total preorders or chains. We introduce and study the properties of four classes of non-representable chains: long chains, planar chains, Aronszajn-like chains and Souslin chains. In the main theorem of the paper we prove that a chain is non-representable if and only if it is a long chain, a planar chain, an Aronszajn-like chain or a Souslin chain. (C) 2002 Published by Elsevier Science B.V.
Abstract:
In an earlier paper [Journal of Mathematical Economics, 37 (2002) 17-38], we proved that if a preference relation on a commodity space is non-representable by a real-valued utility function, then it is necessarily a long chain, a planar chain, an Aronszajn-like chain or a Souslin chain. In this paper, we study the class of planar chains, the simplest example of which is the Debreu chain (R^2, <_l), the plane ordered lexicographically. (C) 2002 Elsevier Science B.V. All rights reserved.
Abstract:
Shadowing of a dynamical system is often used to justify the validity of computer simulations of the system, and in numerical calculations an inverse form of the shadowing concept is also of some interest. In this paper we characterize the notion of shadowing in terms of stability, and express the notion of hyperbolicity using the concept of inverse shadowing.
Abstract:
We present a novel maximum-likelihood-based algorithm for estimating the distribution of alignment scores from the scores of unrelated sequences in a database search. Using a new method for measuring the accuracy of p-values, we show that our maximum-likelihood-based algorithm is more accurate than existing regression-based and lookup table methods. We explore a more sophisticated way of modeling and estimating the score distributions (using a two-component mixture model and expectation maximization), but conclude that this does not improve significantly over simply ignoring scores with small E-values during estimation. Finally, we measure the classification accuracy of p-values estimated in different ways and observe that inaccurate p-values can, somewhat paradoxically, lead to higher classification accuracy. We explain this paradox and argue that statistical accuracy, not classification accuracy, should be the primary criterion in comparisons of similarity search methods that return p-values that adjust for target sequence length.
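The maximum-likelihood idea can be illustrated with a generic extreme-value fit; the choice of a Gumbel form for the null score distribution, the censoring threshold and the use of scipy are assumptions made for illustration, not the authors' implementation.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Scores of "unrelated" sequences from a database search, simulated here as
# Gumbel-distributed noise; in practice these would be the observed scores.
scores = stats.gumbel_r.rvs(loc=20.0, scale=4.0, size=5000, random_state=rng)

# Optionally drop the largest scores (possible true homologs) before fitting,
# analogous to ignoring scores with small E-values during estimation.
threshold = np.quantile(scores, 0.99)
fit_scores = scores[scores <= threshold]

loc, scale = stats.gumbel_r.fit(fit_scores)   # maximum-likelihood estimates

# p-value of a new alignment score under the fitted null distribution.
x = 38.0
p = stats.gumbel_r.sf(x, loc=loc, scale=scale)
print(f"fitted loc={loc:.2f}, scale={scale:.2f}, p-value for score {x}: {p:.3g}")
```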
Abstract:
Control of chaotic instability in a simplified model of a spinning spacecraft with dissipation is achieved using an algorithm derived from Lyapunov's second method. The control method is implemented on a realistic spacecraft parameter configuration which has been found to exhibit chaotic instability for a range of forcing amplitudes and frequencies when a sinusoidally varying torque is applied to the spacecraft. Such a torque may arise in practice from an unbalanced rotor or from vibrations in appendages. Numerical simulations are performed and the results are studied by means of time history, phase space, Poincaré map, Lyapunov characteristic exponents and bifurcation diagrams. (C) 2002 Elsevier Science Ltd. All rights reserved.
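The spacecraft equations are not reproduced in the abstract, so the sketch below applies a Lyapunov-based (second-method) feedback to a generic sinusoidally forced nonlinear oscillator, only to convey the style of control law involved; it is not the paper's model or algorithm.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative only: a sinusoidally forced Duffing-type oscillator with the
# feedback u = -F*cos(w*t) - k*xdot.  With V = 0.5*xd**2 + 0.25*x**4 - 0.5*x**2,
# the controlled system gives dV/dt = -(c + k)*xd**2 <= 0, so trajectories
# settle near an equilibrium of the unforced system instead of following the
# forced (possibly chaotic) response.

c, F, w, k = 0.05, 0.3, 1.0, 0.5

def rhs(t, y, controlled):
    x, xd = y
    u = -F * np.cos(w * t) - k * xd if controlled else 0.0
    return [xd, -c * xd + x - x**3 + F * np.cos(w * t) + u]

t_eval = np.linspace(0, 200, 4000)
free = solve_ivp(rhs, (0, 200), [0.1, 0.0], args=(False,), t_eval=t_eval)
ctrl = solve_ivp(rhs, (0, 200), [0.1, 0.0], args=(True,), t_eval=t_eval)

print("uncontrolled final state:", np.round(free.y[:, -1], 3))
print("controlled final state:  ", np.round(ctrl.y[:, -1], 3))
```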
Abstract:
A new algorithm has been developed for smoothing the surfaces in finite element formulations of contact-impact. A key feature of this method is that the smoothing is done implicitly by constructing smooth signed distance functions for the bodies. These functions are then employed for the computation of the gap and other variables needed for implementation of contact-impact. The smoothed signed distance functions are constructed by a moving least-squares approximation with a polynomial basis. Results show that when nodes are placed on a surface, the surface can be reproduced with an error of about one per cent or less with either a quadratic or a linear basis. With a quadratic basis, the method exactly reproduces a circle or a sphere even for coarse meshes. Results are presented for contact problems involving the contact of circular bodies. Copyright (C) 2002 John Wiley & Sons, Ltd.
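A moving least-squares fit with a quadratic basis can be sketched as follows; the Gaussian weight function, its support size and the circular test surface are illustrative choices, not the paper's formulation.

```python
import numpy as np

def mls_eval(x, nodes, values, h=0.6):
    """Moving least-squares approximation with a 2-D quadratic basis."""
    p = lambda q: np.array([1.0, q[0], q[1], q[0]**2, q[0]*q[1], q[1]**2])
    d2 = np.sum((nodes - x)**2, axis=1)
    w = np.exp(-d2 / h**2)                       # Gaussian weights (assumed form)
    P = np.array([p(q) for q in nodes])          # basis evaluated at the nodes
    A = P.T @ (w[:, None] * P)                   # weighted moment matrix
    b = P.T @ (w * values)
    return p(x) @ np.linalg.solve(A, b)

# Nodes scattered around a circle of radius R; nodal data are signed distances.
R = 1.0
rng = np.random.default_rng(1)
theta = rng.uniform(0, 2 * np.pi, 60)
r = R + rng.uniform(-0.2, 0.2, 60)
nodes = np.column_stack([r * np.cos(theta), r * np.sin(theta)])
values = np.hypot(nodes[:, 0], nodes[:, 1]) - R  # signed distance to the circle

# The smoothed "gap" at a query point is the reconstructed signed distance.
query = np.array([0.9, 0.3])
approx = mls_eval(query, nodes, values)
exact = np.hypot(*query) - R
print(f"MLS signed distance {approx:.4f}  (exact {exact:.4f})")
```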
Abstract:
In computer simulations of smooth dynamical systems, the original phase space is replaced by machine arithmetic, which is a finite set. The resulting spatially discretized dynamical systems do not inherit all functional properties of the original systems, such as surjectivity and existence of absolutely continuous invariant measures. This can lead to computational collapse to fixed points or short cycles. The paper studies loss of such properties in spatial discretizations of dynamical systems induced by unimodal mappings of the unit interval. The problem reduces to studying set-valued negative semitrajectories of the discretized system. As the grid is refined, the asymptotic behavior of the cardinality structure of the semitrajectories follows probabilistic laws corresponding to a branching process. The transition probabilities of this process are explicitly calculated. These results are illustrated by the example of the discretized logistic mapping.
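The basic discretization experiment is easy to reproduce; the grid sizes, the parameter value and the sampling of starting points below are arbitrary choices for illustration, not the paper's setup.

```python
import numpy as np

# Discretize the logistic map f(x) = a*x*(1-x) on a uniform grid of N+1 points
# in [0, 1] and follow forward trajectories of the induced map on the grid.
# Every trajectory is eventually periodic; for coarse grids it often collapses
# to a fixed point or a very short cycle, illustrating the loss of properties
# discussed in the abstract.

a = 4.0
for N in (10**3, 10**4, 10**5):
    grid = np.arange(N + 1) / N
    F = np.rint(a * grid * (1 - grid) * N).astype(int)    # discretized map on indices

    cycle_lengths = set()
    for start in range(0, N + 1, max(N // 50, 1)):        # sample of starting points
        seen = {}
        i, step = start, 0
        while i not in seen:
            seen[i] = step
            i = F[i]
            step += 1
        cycle_lengths.add(step - seen[i])                  # length of the terminal cycle

    print(f"N = {N:>6}: terminal cycle lengths observed = {sorted(cycle_lengths)}")
```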
Abstract:
This article develops a weighted least squares version of Levene's test of homogeneity of variance for a general design, available for both univariate and multivariate situations. When the design is balanced, the univariate and two common multivariate test statistics turn out to be proportional to the corresponding ordinary least squares test statistics obtained from an analysis of variance of the absolute values of the standardized mean-based residuals from the original analysis of the data. The constant of proportionality is simply a design-dependent multiplier (which does not necessarily tend to unity). Explicit results are presented for randomized block and Latin square designs and are illustrated for factorial treatment designs and split-plot experiments. The distribution of the univariate test statistic is close to a standard F-distribution, although it can be slightly underdispersed. For a complex design, the test assesses homogeneity of variance across blocks, treatments, or treatment factors and offers an objective interpretation of the residual plot.
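For a balanced one-way layout the connection described above reduces to an ordinary ANOVA of absolute mean-based residuals; the sketch below shows only that simple case, not the weighted least squares generalization for complex designs, and the data are synthetic.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Balanced one-way layout: 4 treatment groups, 10 observations each,
# with unequal group variances so the test has something to detect.
groups = [rng.normal(loc=0.0, scale=s, size=10) for s in (1.0, 1.0, 2.0, 3.0)]

# Levene-type statistic: one-way ANOVA of the absolute deviations from the
# group means (mean-based residuals, as in the abstract).
abs_resid = [np.abs(g - g.mean()) for g in groups]
F, p = stats.f_oneway(*abs_resid)
print(f"ANOVA on |residuals|: F = {F:.3f}, p = {p:.4f}")

# scipy's built-in version (with center='mean') for comparison.
W, p2 = stats.levene(*groups, center='mean')
print(f"scipy.stats.levene:   W = {W:.3f}, p = {p2:.4f}")
```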
Abstract:
This paper presents a new approach to the LU decomposition method for the simulation of stationary and ergodic random fields. The approach overcomes the size limitations of LU and is suitable for simulations of any size. The proposed approach can facilitate fast updating of generated realizations with new data, when appropriate, without repeating the full simulation process. Based on a novel column partitioning of the L matrix, expressed in terms of successive conditional covariance matrices, the approach presented here demonstrates that LU simulation is equivalent to the successive solution of kriging residual estimates plus random terms. Consequently, it can be used for the LU decomposition of matrices of any size. The simulation approach is termed conditional simulation by successive residuals because, at each step, a small set (group) of random variables is simulated with an LU decomposition of the updated conditional covariance matrix of the residuals. The simulated group is then used to estimate residuals without the need to solve large systems of equations.
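The standard LU simulation that this method generalizes, and a group-wise version using the usual conditional Gaussian (kriging) update, are sketched below with a Cholesky factorization (the symmetric case of LU); the covariance model, grid and group size are illustrative, and the sketch is only meant to convey the successive-residuals idea, not the paper's exact partitioning of L.

```python
import numpy as np

rng = np.random.default_rng(3)

# Stationary covariance on a 1-D grid (exponential model; illustrative choice).
n = 400
x = np.arange(n, dtype=float)
C = np.exp(-np.abs(x[:, None] - x[None, :]) / 30.0)

# Classical LU (Cholesky) simulation: one big factorization, y = L z.
L = np.linalg.cholesky(C + 1e-10 * np.eye(n))
y_full = L @ rng.standard_normal(n)

# Group-wise simulation: simulate successive blocks conditionally on the blocks
# already generated (kriging-type mean plus an LU-simulated residual), so only
# small matrices are ever factorized.
group = 50
y_seq = np.empty(n)
for start in range(0, n, group):
    cur = slice(start, start + group)
    prev = slice(0, start)
    if start == 0:
        mean = np.zeros(group)
        cond = C[cur, cur]
    else:
        weights = np.linalg.solve(C[prev, prev], C[prev, cur])    # kriging weights
        mean = weights.T @ y_seq[prev]                            # conditional mean
        cond = C[cur, cur] - C[cur, prev] @ weights               # conditional covariance
    Lg = np.linalg.cholesky(cond + 1e-10 * np.eye(group))
    y_seq[cur] = mean + Lg @ rng.standard_normal(group)

print("sample variances:", y_full.var().round(3), y_seq.var().round(3))
```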
Abstract:
Motivation: A major issue in cell biology today is how distinct intracellular regions of the cell, like the Golgi apparatus, maintain their unique composition of proteins and lipids. The cell differentially separates Golgi resident proteins from proteins that move through the organelle to other subcellular destinations. We set out to determine if we could distinguish these two types of transmembrane proteins using computational approaches. Results: A new method has been developed to predict Golgi membrane proteins based on their transmembrane domains. To establish the prediction procedure, we took the hydrophobicity values and frequencies of different residues within the transmembrane domains into consideration. A simple linear discriminant function was developed with a small number of parameters derived from a dataset of Type II transmembrane proteins of known localization. It discriminates between proteins destined for the Golgi apparatus and those destined for other (post-Golgi) locations with success rates of 89.3% and 85.2%, respectively, on our redundancy-reduced data sets.
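A linear discriminant of the general kind described can be sketched as follows; the two features (mean transmembrane-domain hydrophobicity and domain length) and the synthetic data are stand-ins, not the authors' parameters or dataset.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical two-feature description of each transmembrane domain:
# (mean hydrophobicity of the TMD, TMD length in residues).
golgi = np.column_stack([rng.normal(2.0, 0.4, 80), rng.normal(18.0, 1.5, 80)])
post_golgi = np.column_stack([rng.normal(2.6, 0.4, 80), rng.normal(21.0, 1.5, 80)])

# Fisher linear discriminant: w = Sw^{-1} (m1 - m2), with the threshold at the
# midpoint of the projected class means.
m1, m2 = golgi.mean(axis=0), post_golgi.mean(axis=0)
Sw = np.cov(golgi, rowvar=False) * (len(golgi) - 1) + \
     np.cov(post_golgi, rowvar=False) * (len(post_golgi) - 1)
w = np.linalg.solve(Sw, m1 - m2)
threshold = 0.5 * (golgi @ w).mean() + 0.5 * (post_golgi @ w).mean()

pred_golgi = (golgi @ w) > threshold
pred_post = (post_golgi @ w) <= threshold
print(f"training success rate, Golgi: {pred_golgi.mean():.1%}, "
      f"post-Golgi: {pred_post.mean():.1%}")
```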
Abstract:
A high definition, finite difference time domain (HD-FDTD) method is presented in this paper. This new method allows the FDTD method to be efficiently applied over a very large frequency range including low frequencies, which are problematic for conventional FDTD methods. In the method, no alterations to the properties of either the source or the transmission media are required. The method is essentially frequency independent and has been verified against analytical solutions within the frequency range 50 Hz-1 GHz. As an example of the lower frequency range, the method has been applied to the problem of induced eddy currents in the human body resulting from the pulsed magnetic field gradients of an MRI system. The new method only requires approximately 0.3% of the source period to obtain an accurate solution. (C) 2003 Elsevier Science Inc. All rights reserved.
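For orientation, the conventional 1-D FDTD (Yee) leapfrog update that such methods build on is sketched below; the grid, time step, source and boundary treatment are arbitrary choices, and this is the standard scheme, not the HD-FDTD variant itself.

```python
import numpy as np

# Conventional 1-D FDTD (Yee) scheme in free space: leapfrog updates of E and H.
c0 = 299_792_458.0
nz, nt = 400, 800
dz = 1e-3                        # 1 mm cells
dt = dz / (2 * c0)               # time step at half the Courant limit

eps0 = 8.854187817e-12
mu0 = 4e-7 * np.pi

E = np.zeros(nz)
H = np.zeros(nz - 1)

for n in range(nt):
    # Update H from the spatial difference of E (half a step behind E in time).
    H += dt / (mu0 * dz) * (E[1:] - E[:-1])
    # Update E at interior nodes from the spatial difference of H (PEC ends).
    E[1:-1] += dt / (eps0 * dz) * (H[1:] - H[:-1])
    # Soft Gaussian-pulse source in the middle of the grid.
    E[nz // 2] += np.exp(-((n - 60) / 20.0) ** 2)

print("peak |E| on the grid after the run:", np.abs(E).max())
```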
Abstract:
Most finite element packages use the Newmark algorithm for time integration of structural dynamics. Various algorithms have been proposed to better optimize the high frequency dissipation of this algorithm. Hulbert and Chung proposed both implicit and explicit forms of the generalized alpha method. The algorithms optimize high frequency dissipation effectively, and despite recent work on algorithms that possess momentum conserving/energy dissipative properties in a non-linear context, the generalized alpha method remains an efficient way to solve many problems, especially with adaptive timestep control. However, the implicit and explicit algorithms use incompatible parameter sets and cannot be used together in a spatial partition, whereas this can be done for the Newmark algorithm, as Hughes and Liu demonstrated, and for the HHT-alpha algorithm developed from it. The present paper shows that the explicit generalized alpha method can be rewritten so that it becomes compatible with the implicit form. All four algorithmic parameters can be matched between the explicit and implicit forms. An element interface between implicit and explicit partitions can then be used, analogous to that devised by Hughes and Liu to extend the Newmark method. The stability of the explicit/implicit algorithm is examined in a linear context and found to exceed that of the explicit partition. The element partition is significantly less dissipative of intermediate frequencies than one using the HHT-alpha method. The explicit algorithm can also be rewritten so that the discrete equation of motion evaluates forces from displacements and velocities found at the predicted mid-point of a cycle. Copyright (C) 2003 John Wiley & Sons, Ltd.
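A compact implementation of the implicit generalized-alpha update for a linear single-degree-of-freedom system is sketched below, using the usual Chung-Hulbert parametrization by the high-frequency spectral radius rho_inf; it is meant only to fix ideas about the algorithm under discussion, not to reproduce the paper's matched explicit form or the implicit/explicit element partition.

```python
import numpy as np

# Implicit generalized-alpha integration for m*a + c*v + k*d = F(t),
# with alpha_m, alpha_f, gamma, beta chosen from the spectral radius rho_inf
# (Chung-Hulbert parametrization).  Single degree of freedom only.

def generalized_alpha(m, c, k, force, d0, v0, h, nsteps, rho_inf=0.8):
    am = (2 * rho_inf - 1) / (rho_inf + 1)
    af = rho_inf / (rho_inf + 1)
    gamma = 0.5 - am + af
    beta = 0.25 * (1 - am + af) ** 2

    d, v = d0, v0
    a = (force(0.0) - c * v - k * d) / m          # consistent initial acceleration
    out = [(0.0, d)]
    for n in range(nsteps):
        t, t1 = n * h, (n + 1) * h
        # Balance at the generalized mid-point, with Newmark updates for d, v
        # substituted in, solved for the new acceleration a1.
        lhs = m * (1 - am) + c * (1 - af) * gamma * h + k * (1 - af) * beta * h * h
        rhs = ((1 - af) * force(t1) + af * force(t)
               - m * am * a
               - c * ((1 - af) * (v + h * (1 - gamma) * a) + af * v)
               - k * ((1 - af) * (d + h * v + h * h * (0.5 - beta) * a) + af * d))
        a1 = rhs / lhs
        d = d + h * v + h * h * ((0.5 - beta) * a + beta * a1)
        v = v + h * ((1 - gamma) * a + gamma * a1)
        a = a1
        out.append((t1, d))
    return np.array(out)

# Undamped free vibration as a check: d(t) should stay close to cos(2*pi*t).
m, k = 1.0, (2 * np.pi) ** 2                      # natural period T = 1
res = generalized_alpha(m, 0.0, k, lambda t: 0.0, d0=1.0, v0=0.0, h=0.01, nsteps=100)
print(f"d(1.0) = {res[-1, 1]:.4f}  (close to 1.0 after one full period)")
```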