910 results for Set theory.
Abstract:
This article is concerned with the evolution of haploid organisms that reproduce asexually. In a seminal piece of work, Eigen and coauthors proposed the quasispecies model in an attempt to understand such an evolutionary process. Their work has impacted antiviral treatment and vaccine design strategies. Yet, predictions of the quasispecies model are at best viewed as a guideline, primarily because it assumes an infinite population size, whereas realistic population sizes can be quite small. In this paper we consider a population genetics-based model aimed at understanding the evolution of such organisms with finite population sizes and present a rigorous study of the convergence and computational issues that arise therein. Our first result is structural and shows that, at any time during the evolution, as the population size tends to infinity, the distribution of genomes predicted by our model converges to that predicted by the quasispecies model. This justifies the continued use of the quasispecies model to derive guidelines for intervention. While the stationary state in the quasispecies model is readily obtained, due to the explosion of the state space in our model, exact computations are prohibitive. Our second set of results is computational in nature and addresses this issue. We derive conditions on the parameters of evolution under which our stochastic model mixes rapidly. Further, for a class of widely used fitness landscapes we give a fast deterministic algorithm which computes the stationary distribution of our model. These computational tools are expected to serve as a framework for the modeling of strategies for the deployment of mutagenic drugs.
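As a rough illustration of the convergence result, the sketch below compares the deterministic (infinite-population) quasispecies update with a finite-population resampling analogue on a toy two-genotype landscape. All parameter values are hypothetical and are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-genotype landscape (hypothetical values, not from the paper)
f = np.array([1.0, 0.7])           # fitness of genotypes 0 and 1
Q = np.array([[0.99, 0.01],        # Q[i, j] = P(parent i -> offspring j)
              [0.01, 0.99]])

def quasispecies_step(p):
    """Deterministic (infinite-population) update: selection, then mutation."""
    w = p * f
    return (w @ Q) / w.sum()

def finite_population_step(p, M):
    """Finite-population analogue: multinomially resample M offspring."""
    probs = (p * f) @ Q
    probs /= probs.sum()
    return rng.multinomial(M, probs) / M

p_inf = p_fin = np.array([0.5, 0.5])
for _ in range(200):
    p_inf = quasispecies_step(p_inf)
    p_fin = finite_population_step(p_fin, M=10_000)
print(p_inf, p_fin)  # close for large M, as the convergence result suggests
```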
Abstract:
Ampcalculator (AMPC) is a Mathematica-based program that was made publicly available some time ago by Unterdorfer and Ecker. It enables the user to compute several processes at one loop (up to O(p^4)) in SU(3) chiral perturbation theory, including matrix elements and form factors for strong and non-leptonic weak processes with at most six external states. It was used to compute some novel processes and was tested against well-known results by the original authors. Here we present the results of several thorough checks of the package. Exhaustive checks performed by the original authors are not publicly available, hence the present effort. Some new results are obtained from the software, especially in the kaon odd-intrinsic-parity non-leptonic decay sector involving the coupling G_27. Another illustrative set of amplitudes we provide at tree level arises in the context of tau decays with several mesons, including quark mass effects, of use to the BELLE experiment. All eight meson-meson scattering amplitudes have been checked. The kaon-Compton amplitude has been checked, and a minor error in the published results has been pointed out. This exercise is tutorial based, and several input and output notebooks are being made available as ancillary files on the arXiv. Some of the additional notebooks contain the explicit expressions we used for comparison with established results. The purpose is to encourage users to apply the software to suit their specific needs. An automatic amplitude generator of this type can provide error-free outputs that can serve as inputs for further simplification, and in varied scenarios such as applications of chiral perturbation theory at finite temperature, density, and volume. It can also be used by students as a learning aid in low-energy hadron dynamics.
Abstract:
Density functional theory (DFT) calculations have been performed to investigate the geometric, vibrational, and electronic properties of the chlorogenic acid isomer 3-CQA ((1R,3R,4S,5R)-3-{[(2E)-3-(3,4-dihydroxyphenyl)prop-2-enoyl]oxy}-1,4,5-trihydroxycyclohexanecarboxylic acid), a major phenolic compound in coffee. DFT calculations with the 6-311G(d,p) basis set produce very good results. The electrostatic potential mapped onto an isodensity surface has been obtained. A natural bond orbital (NBO) analysis has been performed in order to study intramolecular bonding, interactions among bonds, and delocalization of unpaired electrons. HOMO-LUMO studies give insight into the interaction of the molecule with other species. The calculated HOMO and LUMO energies indicate that charge transfer occurs within the molecule.
Abstract:
In this paper, we explore fundamental limits on the number of tests required to identify a given number of "healthy" items from a large population containing a small number of "defective" items, in a nonadaptive group testing framework. Specifically, we derive mutual information-based upper bounds on the number of tests required to identify the required number of healthy items. Our results show that an impressive reduction in the number of tests is achievable compared to the conventional approach of using classical group testing to first identify the defective items and then pick the required number of healthy items from the complement set. For example, to identify L healthy items out of a population of N items containing K defective items, when the tests are reliable, our results show that O(K(L - 1)/(N - K)) measurements are sufficient. In contrast, the conventional approach requires O(K log(N/K)) measurements. We derive our results in a general sparse signal setup, so they are also applicable to other sparse-signal applications such as compressive sensing.
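To get a feel for the scale of the claimed savings, the back-of-the-envelope computation below plugs illustrative numbers (not from the paper) into the two bounds; constants hidden by the O(·) notation are ignored.

```python
import math

# Illustrative population sizes, not taken from the paper
N, K, L = 10_000, 100, 500

healthy_first = K * (L - 1) / (N - K)   # O(K(L-1)/(N-K)): identify healthy items directly
classical = K * math.log2(N / K)        # O(K log(N/K)): classical group testing first

print(f"direct approach:    ~{healthy_first:.0f} tests")
print(f"classical approach: ~{classical:.0f} tests")
```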
Abstract:
The tetrablock, roughly speaking, is the set of all linear fractional maps that map the open unit disc to itself. A formal definition of this inhomogeneous domain is given below. This paper considers triples of commuting bounded operators (A, B, P) that have the tetrablock as a spectral set. Such a triple is called a tetrablock contraction. The motivation comes from the success of model theory in another inhomogeneous domain, namely the symmetrized bidisc Γ. A pair of commuting bounded operators (S, P) with Γ as a spectral set is called a Γ-contraction, and always has a dilation. The two domains are related intricately, as Lemma 3.2 below shows. Given a triple (A, B, P) as above, we associate with it a pair (F_1, F_2), called its fundamental operators. We show that (A, B, P) dilates if the fundamental operators F_1 and F_2 satisfy certain commutativity conditions. Moreover, the dilation space is no bigger than the minimal isometric dilation space of the contraction P. Whether these commutativity conditions are also necessary is not known. What we have shown is that if there is a tetrablock isometric dilation on the minimal isometric dilation space of P, then those commutativity conditions are necessarily imposed on the fundamental operators. En route, we decipher the structure of a tetrablock unitary (the candidate for the dilation triple) and a tetrablock isometry (the restriction of a tetrablock unitary to a joint invariant subspace). We derive new results about Γ-contractions and apply them to tetrablock contractions. The methods applied are motivated by [11]. Although the calculations are lengthy and more complicated, they reveal that the dilation depends on the mutual relationship of the two fundamental operators, so that certain conditions need to be satisfied. The question of whether all tetrablock contractions dilate is unresolved.
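For reference, the standard definition of the tetrablock in the literature (following Abouhajar, White, and Young; recorded here for the reader's convenience, not quoted from this paper) is

```latex
\mathbb{E} = \left\{ (x_1, x_2, x_3) \in \mathbb{C}^3 :
  1 - x_1 z - x_2 w + x_3 z w \neq 0 \ \text{whenever } |z| \le 1,\ |w| \le 1 \right\}
```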
Abstract:
We set up the theory of newforms of half-integral weight on Γ_0(8N) and Γ_0(16N), where N is odd and squarefree. Further, we extend the definition of the Kohnen plus space in general for trivial character and also study the theory of newforms in the plus spaces on Γ_0(8N) and Γ_0(16N), where N is odd and squarefree. Finally, we show that the Atkin-Lehner operator W_4 acts as the identity operator on S_{2k}^{new}(4N), where N is odd and squarefree. This proves that S_{2k}^{-}(4) = S_{2k}(4).
Abstract:
We present a framework for obtaining reliable solid-state charge and optical excitations and spectra from optimally tuned range-separated hybrid density functional theory. The approach, which is fully couched within the formal framework of generalized Kohn-Sham theory, allows for the accurate prediction of exciton binding energies. We demonstrate our approach through first-principles calculations of one- and two-particle excitations in pentacene, a molecular semiconducting crystal, where our results are in excellent agreement with experiments and prior computations. We further show that with one adjustable parameter, set to reproduce the known band gap, this method accurately predicts band structures and optical spectra of silicon and lithium fluoride, prototypical covalent and ionic solids. Our findings indicate that for a broad range of extended bulk systems, this method may provide a computationally inexpensive alternative to many-body perturbation theory, opening the door to studies of materials of increasing size and complexity.
Abstract:
Based on the theory of the pumping well test, a transient injection well test is proposed in this paper. The design method and the scope of application are discussed in detail. Mathematical models are developed for the short-time and long-time transient injection tests, respectively. A double-logarithm type-curve matching method is introduced for analyzing field transient injection test data. A set of methods for transient injection test design, experiment performance, and data analysis is established. Several field tests were analyzed, and the results show that the test model and method are suitable for the transient injection test and can be used to deal with real engineering problems.
Abstract:
A previously published refined shear deformation theory is used to analyse free vibration of laminated shells. The theory includes the assumption that the transverse shear strains across any two layers are linearly dependent on each other. The theory has the same dependent variables as first-order shear deformation theory, but the set of governing differential equations is of twelfth order. No shear correction factors are required. Free vibration of symmetric cross-ply laminated cylindrical shells and of symmetric and antisymmetric cross-ply cylindrical panels is calculated. The numerical results are in good agreement with those from three-dimensional elasticity theory.
Abstract:
A previously published discrete-layer shear deformation theory is used to analyze free vibration of laminated plates. The theory includes the assumption that the transverse shear strains across any two layers are linearly dependent on each other. The theory has the same dependent variables as first-order shear deformation theory, but the set of governing differential equations is of twelfth order. No shear correction factors are required. Free vibration of simply supported symmetric and antisymmetric cross-ply plates is calculated. The numerical results are in good agreement with those from three-dimensional elasticity theory.
Abstract:
A new method is proposed to solve the closure problem of turbulence theory and to derive the Kolmogorov law in an Eulerian framework. Instead of using complex Fourier components of the velocity field as modal parameters, a complete set of independent real parameters and dynamic equations is worked out to describe the dynamic states of a turbulence. Classical statistical mechanics is used to study the statistical behavior of the turbulence. An approximate stationary solution of the Liouville equation is obtained by a perturbation method based on a Langevin-Fokker-Planck (LFP) model. The dynamic damping coefficient η of the LFP model is treated as an optimum control parameter to minimize the error of the perturbation solution; this leads to a convergent integral equation for η to replace the divergent response equation of Kraichnan's direct-interaction (DI) approximation, thereby solving the closure problem without appealing to a Lagrangian formulation. The Kolmogorov constant K_0 is evaluated numerically, obtaining K_0 = 1.2, which is compatible with the experimental data given by Gibson and Schwartz (1963).
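For context, the Kolmogorov law referred to here is the standard inertial-range energy spectrum, where ε is the mean energy dissipation rate and K_0 is the constant evaluated in the abstract:

```latex
E(k) = K_0\, \varepsilon^{2/3} k^{-5/3}
```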
Abstract:
In recent years coastal resource management has begun to stand as its own discipline. Its multidisciplinary nature gives it access to theory situated in each of the diverse fields it may encompass, yet management practices often revert to the primary field of the manager. There is a lack of a common set of “coastal” theory from which managers can draw. Seven resource-related issues with which coastal area managers must contend include: coastal habitat conservation, traditional maritime communities and economies, strong development and use pressures, adaptation to sea level rise and climate change, landscape sustainability and resilience, coastal hazards, and emerging energy technologies. The complexity and range of human and environmental interactions at the coast suggest a strong need for a common body of coastal management theory which managers would do well to understand generally. Planning theory, which is itself a synthesis of concepts from multiple fields, contains ideas of general value to coastal management. Planning theory can not only provide an example of how to develop a multi- or transdisciplinary set of theory, but may also provide an actual theoretical foundation for a coastal theory. In particular, we discuss five concepts in the planning theory discourse and present their utility for coastal resource managers: “wicked” problems, ecological planning, the epistemology of knowledge communities, the role of the planner/manager, and collaborative planning. While these theories are known and familiar to some professionals working at the coast, we argue that there is a need for broader understanding amongst the various specialists working in the increasingly identifiable field of coastal resource management.
Abstract:
Signal processing techniques play important roles in the design of digital communication systems, including information manipulation, transmitter signal processing, channel estimation, channel equalization, and receiver signal processing. By interacting with communication theory and system-implementation technologies, signal processing specialists develop efficient schemes for various communication problems by wisely exploiting mathematical tools such as analysis, probability theory, matrix theory, and optimization theory. In recent years, researchers realized that multiple-input multiple-output (MIMO) channel models are applicable to a wide range of different physical communication channels. Using elegant matrix-vector notation, many MIMO transceiver (precoder and equalizer) design problems can be solved by matrix and optimization theory. Furthermore, researchers have shown that majorization theory and matrix decompositions, such as the singular value decomposition (SVD), the geometric mean decomposition (GMD), and the generalized triangular decomposition (GTD), provide unified frameworks for solving many point-to-point MIMO transceiver design problems.
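As a minimal illustration of how a matrix decomposition solves a transceiver design problem, the NumPy sketch below uses the SVD to convert a MIMO channel into parallel scalar subchannels. It is a generic textbook construction, not any specific scheme from this thesis.

```python
import numpy as np

rng = np.random.default_rng(1)
# Random 4x4 complex MIMO channel
H = (rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))) / np.sqrt(2)

# H = U diag(s) Vh: precoding with V and equalizing with U^H
# turns y = H x into parallel scalar subchannels with gains s[k].
U, s, Vh = np.linalg.svd(H)
x = rng.standard_normal(4) + 1j * rng.standard_normal(4)  # data symbols
y = H @ (Vh.conj().T @ x)                                 # transmit V @ x
z = U.conj().T @ y                                        # equalize with U^H
print(np.allclose(z, s * x))                              # True: channel decoupled
```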
In this thesis, we consider transceiver design problems for linear time-invariant (LTI) flat MIMO channels, linear time-varying narrowband MIMO channels, flat MIMO broadcast channels, and doubly selective scalar channels. The channel estimation problem is also considered. The main contributions of this dissertation are the development of new matrix decompositions and the use of these decompositions, together with majorization theory, in practical transmit-receive scheme designs for transceiver optimization problems. Elegant solutions are obtained, novel transceiver structures are developed, ingenious algorithms are proposed, and performance analyses are derived.
The first part of the thesis focuses on transceiver design for LTI flat MIMO channels. We propose a novel matrix decomposition which decomposes a complex matrix as a product of several sets of semi-unitary matrices and upper triangular matrices in an iterative manner. The complexity of the new decomposition, the generalized geometric mean decomposition (GGMD), is always less than or equal to that of the geometric mean decomposition (GMD). The optimal GGMD parameters which yield the minimal complexity are derived. Based on the channel state information (CSI) at both the transmitter (CSIT) and the receiver (CSIR), the GGMD is used to design a butterfly-structured decision feedback equalizer (DFE) MIMO transceiver which achieves the minimum average mean square error (MSE) under a total transmit power constraint. A novel iterative detection algorithm for the corresponding receiver is also proposed. For application to cyclic prefix (CP) systems, in which the SVD of the equivalent channel matrix can be computed easily, the proposed GGMD transceiver has a K/log_2(K)-fold complexity advantage over the GMD transceiver, where K is the number of data symbols per data block and is a power of 2. The performance analysis shows that the GGMD DFE transceiver can convert a MIMO channel into a set of parallel subchannels with the same bias and signal-to-interference-plus-noise ratios (SINRs). Hence, the average bit error rate (BER) is automatically minimized without the need for bit allocation. Moreover, the proposed transceiver can achieve the channel capacity simply by applying independent scalar Gaussian codes of the same rate on the subchannels.
In the second part of the thesis, we focus on MIMO transceiver design for slowly time-varying MIMO channels under the zero-forcing or MMSE criterion. Even though the GGMD/GMD DFE transceivers work for slowly time-varying MIMO channels by exploiting the instantaneous CSI at both ends, their performance is by no means optimal, since the temporal diversity of the time-varying channels is not exploited. Based on the GTD, we develop the space-time GTD (ST-GTD) for the decomposition of linear time-varying flat MIMO channels. Under the assumption that CSIT, CSIR, and channel prediction are available, we use the proposed ST-GTD to develop space-time geometric mean decomposition (ST-GMD) DFE transceivers under the zero-forcing or MMSE criterion. Under perfect channel prediction, the new system minimizes both the average MSE at the detector in each space-time (ST) block (which consists of several coherence blocks) and the average per-ST-block BER in the moderately high SNR region. Moreover, the ST-GMD DFE transceiver designed under the MMSE criterion maximizes the Gaussian mutual information over the equivalent channel seen by each ST block. In general, the newly proposed transceivers perform better than the GGMD-based systems, since the superimposed temporal precoder is able to exploit the temporal diversity of time-varying channels. For practical applications, a novel ST-GTD based system which does not require channel prediction, but has the same asymptotic BER performance as the ST-GMD DFE transceiver, is also proposed.
The third part of the thesis considers two quality-of-service (QoS) transceiver design problems for flat MIMO broadcast channels. The first is the power minimization problem (min-power) with a total bitrate constraint and per-stream BER constraints. The second is the rate maximization problem (max-rate) with a total transmit power constraint and per-stream BER constraints. Exploiting a particular class of joint triangularization (JT), we are able to jointly optimize the bit allocation and the broadcast DFE transceiver for the min-power and max-rate problems. The resulting optimal designs are called the minimum-power JT broadcast DFE transceiver (MPJT) and the maximum-rate JT broadcast DFE transceiver (MRJT), respectively. In addition to the optimal designs, two suboptimal designs based on the QR decomposition are proposed; they are realizable for an arbitrary number of users.
Finally, we investigate the design of a discrete Fourier transform (DFT) modulated filterbank transceiver (DFT-FBT) for LTV scalar channels. For both known LTV channels and unknown wide-sense stationary uncorrelated scattering (WSSUS) statistical channels, we show how to optimize the transmitting and receiving prototypes of a DFT-FBT so that the SINR at the receiver is maximized. Also, a novel pilot-aided subspace channel estimation algorithm is proposed for orthogonal frequency division multiplexing (OFDM) systems with quasi-stationary multipath Rayleigh fading channels. Using the concept of a difference co-array, the new technique can construct M^2 co-pilots from M physical pilot tones with alternating pilot placement. Subspace methods, such as MUSIC and ESPRIT, can be used to estimate the multipath delays, and theoretically the number of identifiable paths is up to O(M^2). With the delay information, an MMSE estimator of the frequency response is derived. Simulations show that the proposed method outperforms the conventional subspace channel estimator when the number of multipaths is greater than or equal to the number of physical pilots minus one.
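A toy sketch of the difference co-array idea follows: the M^2 pairwise differences of M pilot positions can cover far more lags than the M pilots themselves, provided the placement is chosen well. The placements below are illustrative only and are not the alternating scheme of the thesis.

```python
def coarray(pilots):
    """All pairwise differences (lags) of the pilot positions."""
    return {p - q for p in pilots for q in pilots}

M = 6
uniform = list(range(0, 2 * M, 2))   # uniformly spaced pilots
sparse = [0, 1, 2, 3, 7, 11]         # an irregular placement (illustrative)

print(len(coarray(uniform)))  # 2M - 1 = 11 distinct lags
print(len(coarray(sparse)))   # 23 distinct lags from the same M pilots
```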
Abstract:
In this work, the development of a probabilistic approach to robust control is motivated by structural control applications in civil engineering. Often in civil structural applications, a system's performance is specified in terms of its reliability. In addition, the model and input uncertainty for the system may be described most appropriately using probabilistic or "soft" bounds on the model and input sets. The probabilistic robust control methodology contrasts with existing H∞/μ robust control methodologies, which do not use probability information for the model and input uncertainty sets, and therefore yield only the guaranteed (i.e., "worst-case") system performance, with no information about the system's probable performance, which would be of interest to civil engineers.
The design objective for the probabilistic robust controller is to maximize the reliability of the uncertain structure/controller system for a probabilistically-described uncertain excitation. The robust performance is computed for a set of possible models by weighting the conditional performance probability for a particular model by the probability of that model, then integrating over the set of possible models. This integration is accomplished efficiently using an asymptotic approximation. The probable performance can be optimized numerically over the class of allowable controllers to find the optimal controller. Also, if structural response data becomes available from a controlled structure, its probable performance can easily be updated using Bayes's Theorem to update the probability distribution over the set of possible models. An updated optimal controller can then be produced, if desired, by following the original procedure. Thus, the probabilistic framework integrates system identification and robust control in a natural manner.
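A minimal sketch of the weighting-and-integration step is given below, with a toy one-parameter model class, a hypothetical failure criterion, and plain Monte Carlo in place of the asymptotic approximation; none of these choices come from the thesis.

```python
import numpy as np

rng = np.random.default_rng(2)

def failure_prob(theta, n_mc=20_000):
    """Conditional failure probability for model parameter theta
    (toy stand-in: response ~ N(0, theta); failure if |response| > 3)."""
    r = rng.normal(0.0, np.sqrt(theta), n_mc)
    return np.mean(np.abs(r) > 3.0)

# Robust performance: weight each model's conditional failure probability
# by the model's probability, then sum (integrate) over the model set.
thetas = np.linspace(0.5, 2.0, 16)            # candidate models
weights = np.exp(-(thetas - 1.0) ** 2 / 0.1)  # unnormalized model probabilities
weights /= weights.sum()

robust_pf = np.sum(weights * np.array([failure_prob(t) for t in thetas]))
print(f"robust failure probability ~ {robust_pf:.4f}")
```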
The probabilistic robust control methodology is applied to two systems in this thesis. The first is a high-fidelity computer model of a benchmark structural control laboratory experiment. For this application, uncertainty in the input model only is considered. The probabilistic control design minimizes the failure probability of the benchmark system while remaining robust with respect to the input-model uncertainty. The optimal low-order controller compares favorably with higher-order controllers designed for the same benchmark system using other approaches. The second application is the Caltech Flexible Structure, a lightweight aluminum truss structure actuated by three voice coil actuators. A controller is designed to minimize the failure probability for a nominal model of this system. Furthermore, the method for updating the model-based performance calculation given new response data from the system is illustrated.
Abstract:
This thesis studies decision making under uncertainty and how economic agents respond to information. The classic model of subjective expected utility and Bayesian updating is often at odds with empirical and experimental results; people exhibit systematic biases in information processing and often exhibit aversion to ambiguity. The aim of this work is to develop simple models that capture observed biases and study their economic implications.
In the first chapter I present an axiomatic model of cognitive dissonance, in which an agent's response to information explicitly depends upon past actions. I introduce novel behavioral axioms and derive a representation in which beliefs are directionally updated. The agent twists the information and overweights states in which his past actions provide a higher payoff. I then characterize two special cases of the representation. In the first case, the agent distorts the likelihood ratio of two states by a function of the utility values of the previous action in those states. In the second case, the agent's posterior beliefs are a convex combination of the Bayesian belief and the one which maximizes the conditional value of the previous action. Within the second case a unique parameter captures the agent's sensitivity to dissonance, and I characterize a way to compare sensitivity to dissonance between individuals. Lastly, I develop several simple applications and show that cognitive dissonance contributes to the equity premium and price volatility, asymmetric reaction to news, and belief polarization.
The second chapter characterizes a decision maker with sticky beliefs, that is, one who does not update enough in response to information, where "enough" means as much as a Bayesian decision maker would. This chapter provides axiomatic foundations for sticky beliefs by weakening the standard axioms of dynamic consistency and consequentialism. I derive a representation in which updated beliefs are a convex combination of the prior and the Bayesian posterior. A unique parameter captures the weight on the prior and is interpreted as the agent's measure of belief stickiness or conservatism bias. This parameter is endogenously identified from preferences and is easily elicited from experimental data.
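A minimal sketch of the sticky update on a finite state space follows; the parameter name stickiness is illustrative, standing in for the chapter's conservatism parameter.

```python
import numpy as np

def sticky_update(prior, likelihood, stickiness):
    """Updated belief = convex combination of the prior and the Bayesian
    posterior. stickiness = 0 recovers Bayes; stickiness = 1 never updates."""
    bayes = prior * likelihood
    bayes /= bayes.sum()
    return stickiness * prior + (1.0 - stickiness) * bayes

prior = np.array([0.5, 0.5])
likelihood = np.array([0.9, 0.2])              # P(signal | state)
print(sticky_update(prior, likelihood, 0.0))   # pure Bayesian posterior
print(sticky_update(prior, likelihood, 0.4))   # conservative (sticky) update
```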
The third chapter deals with updating in the face of ambiguity, using the framework of Gilboa and Schmeidler. There is no consensus on the correct way to update a set of priors. Current methods either do not allow a decision maker to make an inference about her priors or require an extreme level of inference. In this chapter I propose and axiomatize a general model of updating a set of priors. A decision maker who updates her beliefs in accordance with the model can be thought of as choosing a threshold that is used to determine whether a prior is plausible, given some observation. She retains the plausible priors and applies Bayes' rule. This model includes generalized Bayesian updating and maximum likelihood updating as special cases.
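One way to implement the threshold rule on a finite state space is sketched below; the parameterization by alpha is an assumption made for illustration, chosen so that the two special cases named above fall out at the endpoints.

```python
import numpy as np

def update_prior_set(priors, likelihood, alpha):
    """Retain priors judged plausible after the observation, then Bayes-update.
    alpha = 0 -> generalized Bayesian updating (every prior survives);
    alpha = 1 -> maximum likelihood updating (only best-fitting priors)."""
    fits = np.array([p @ likelihood for p in priors])  # P(observation) under each prior
    kept = [p for p, f in zip(priors, fits) if f >= alpha * fits.max()]
    return [p * likelihood / (p @ likelihood) for p in kept]

priors = [np.array([0.8, 0.2]), np.array([0.5, 0.5]), np.array([0.1, 0.9])]
likelihood = np.array([0.7, 0.1])  # P(observation | state)
print(len(update_prior_set(priors, likelihood, 0.0)))  # 3: all priors retained
print(len(update_prior_set(priors, likelihood, 1.0)))  # 1: ML updating
```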