907 results for disordered systems (theory)
Abstract:
I. Trimesic acid (1,3,5-benzenetricarboxylic acid) crystallizes with a monoclinic unit cell of dimensions a = 26.52 Å, b = 16.42 Å, c = 26.55 Å, and β = 91.53°, with 48 molecules per unit cell. Extinctions indicated a space group of Cc or C2/c; a satisfactory structure was obtained in the latter with 6 molecules per asymmetric unit (C54O36H36, formula weight 1261 g/mol). Of approximately 12,000 independent reflections within the CuKα sphere, the intensities of 11,563 were recorded visually from equi-inclination Weissenberg photographs.
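As a quick consistency check, the monoclinic cell volume V = a·b·c·sin β and the implied crystal density follow directly from the constants above (a minimal sketch; the per-molecule formula weight is taken as 1261/6 g/mol from the abstract, and Avogadro's number is supplied by us):

```python
import math

# Cell constants from the abstract (taken as exact for this estimate)
a, b, c = 26.52, 16.42, 26.55        # Angstrom
beta = math.radians(91.53)
Z = 48                               # molecules per unit cell
M = 1261.0 / 6                       # g/mol per molecule (six molecules: 1261 g/mol)
N_A = 6.02214e23                     # Avogadro's number

V = a * b * c * math.sin(beta)       # monoclinic cell volume, Angstrom^3
rho = Z * M / (N_A * V * 1e-24)      # crystal density, g/cm^3 (1 A^3 = 1e-24 cm^3)
print(f"V = {V:.0f} A^3, rho = {rho:.2f} g/cm^3")
```

The resulting density, roughly 1.4 to 1.5 g/cm^3, is in the range typical of hydrogen-bonded organic acids, which supports the internal consistency of the reported cell constants.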
The structure was solved by packing considerations aided by molecular transforms and two- and three-dimensional Patterson functions. Hydrogen positions were found on difference maps. A total of 978 parameters were refined by least squares; these included hydrogen parameters and anisotropic temperature factors for the C and O atoms. The final R factor was 0.0675; the final "goodness of fit" was 1.49. All calculations were carried out on the Caltech IBM 7040-7094 computer using the CRYRM Crystallographic Computing System.
The six independent molecules fall into two groups of three nearly parallel molecules. All molecules are connected by carboxyl-to-carboxyl hydrogen-bond pairs to form a continuous array of six-molecule rings with a chicken-wire appearance. These arrays bend to assume two orientations, forming pleated sheets. Arrays in different orientations interpenetrate, three molecules in one orientation passing through the holes of three parallel arrays in the alternate orientation, to produce a completely interlocking network. One third of the carboxyl hydrogen atoms were found to be disordered.
II. Optical transforms as related to X-ray diffraction patterns are discussed with reference to the theory of Fraunhofer diffraction.
The use of a systems approach in crystallographic computing is discussed with special emphasis on the way in which this has been done at the California Institute of Technology.
An efficient manner of calculating Fourier and Patterson maps on a digital computer is presented. Expressions for the calculation of to-scale maps for standard sections and for general-plane sections are developed; space-group-specific expressions in a form suitable for computers are given for all space groups except the hexagonal ones.
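The Fourier synthesis that such maps are built from can be sketched in one dimension: an electron-density map is a Fourier sum over structure factors, and including each Friedel mate F(-h) = conj(F(h)) makes the density real. The structure-factor values below are hypothetical, chosen only to illustrate the calculation:

```python
import numpy as np

# Hypothetical 1D structure factors for reflections h = 0..3
# (illustrative values only, not taken from the thesis)
F = {0: 10.0 + 0.0j, 1: 4.0 - 2.0j, 2: -1.5 + 0.5j, 3: 0.0 + 0.8j}

x = np.linspace(0.0, 1.0, 200, endpoint=False)    # fractional coordinate
rho = np.zeros_like(x, dtype=complex)
for h, Fh in F.items():
    rho += Fh * np.exp(-2j * np.pi * h * x)
    if h != 0:                                    # add the Friedel mate F(-h) = conj(F(h))
        rho += np.conj(Fh) * np.exp(2j * np.pi * h * x)

# With Friedel pairs included, the synthesized density is real to rounding error
density = rho.real
```

In practice the space-group-specific expressions mentioned above reduce such sums to sines and cosines over the asymmetric unit; the complex-exponential form here is just the generic starting point.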
Expressions for the calculation of settings for an Eulerian-cradle diffractometer are developed for both the general triclinic case and the orthogonal case.
Photographic materials on pp. 4, 6, 10, and 20 are essential and will not reproduce clearly on Xerox copies. Photographic copies should be ordered.
Abstract:
The high computational cost of correlated wavefunction theory (WFT) calculations has motivated the development of numerous methods to partition the description of large chemical systems into smaller subsystem calculations. For example, WFT-in-DFT embedding methods facilitate the partitioning of a system into two subsystems: a subsystem A that is treated using an accurate WFT method, and a subsystem B that is treated using a more efficient Kohn-Sham density functional theory (KS-DFT) method. Representation of the interactions between subsystems is non-trivial, and often requires the use of approximate kinetic energy functionals or computationally challenging optimized effective potential calculations; however, it has recently been shown that these challenges can be eliminated through the use of a projection operator. This dissertation describes the development and application of embedding methods that enable accurate and efficient calculation of the properties of large chemical systems.
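The projection-operator idea can be illustrated schematically: adding a large level shift μP, where P projects onto the occupied space of subsystem B, forces the low-lying eigenvectors of a Fock-like matrix out of that space, without any approximate kinetic-energy functional. The matrices below are random toy stand-ins, not the actual WFT-in-DFT operators, and H, b, and μ are our own illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy symmetric "Fock-like" matrix for the full system (illustrative only)
n = 8
A = rng.standard_normal((n, n))
H = (A + A.T) / 2

# Hypothetical subsystem-B occupied orbital: a single normalized vector
b = rng.standard_normal(n)
b /= np.linalg.norm(b)
P = np.outer(b, b)                  # projector onto span{b}

mu = 1.0e6                          # level-shift parameter
w, v = np.linalg.eigh(H + mu * P)   # eigenvalues in ascending order

# The low-lying eigenvectors are numerically orthogonal to b,
# so the shifted space is excluded from the variational problem
overlap = abs(b @ v[:, 0])
print(overlap)
```

The single shifted level is pushed up by roughly μ, while the remaining spectrum is that of H restricted to the orthogonal complement of b, which is the essence of the projection-based embedding construction.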
Chapter 1 introduces a method for efficiently performing projection-based WFT-in-DFT embedding calculations on large systems. This is accomplished by using a truncated basis set representation of the subsystem A wavefunction. We show that naive truncation of the basis set associated with subsystem A can lead to large numerical artifacts, and present an approach for systematically controlling these artifacts.
Chapter 2 describes the application of the projection-based embedding method to investigate the oxidative stability of lithium-ion batteries. We study the oxidation potentials of mixtures of ethylene carbonate (EC) and dimethyl carbonate (DMC) by using the projection-based embedding method to calculate the vertical ionization energy (IE) of individual molecules at the CCSD(T) level of theory, while explicitly accounting for the solvent using DFT. Interestingly, we reveal that large contributions to the solvation properties of DMC originate from quadrupolar interactions, resulting in a much larger solvent reorganization energy than that predicted using simple dielectric continuum models. Demonstration that the solvation properties of EC and DMC are governed by fundamentally different intermolecular interactions provides insight into key aspects of lithium-ion batteries, with relevance to electrolyte decomposition processes, solid-electrolyte interphase formation, and the local solvation environment of lithium cations.
Abstract:
Network information theory and channels with memory are two important but difficult frontiers of information theory. In this two-part dissertation we study these two areas, each comprising one part. In the first part we study the so-called entropy vectors via finite group theory, and the network codes constructed from finite groups. In particular, we identify the smallest finite group that violates the Ingleton inequality, an inequality respected by all linear network codes but not by all entropy vectors. Based on an analysis of this group we generalize it to several families of Ingleton-violating groups, which may be used to design good network codes. In that direction, we study the network codes constructed with finite groups, and in particular show that linear network codes are embedded in the group network codes constructed with these Ingleton-violating families. Such codes are strictly more powerful than linear network codes, as they can violate the Ingleton inequality while linear network codes cannot. In the second part, we study the impact of memory on channel capacity through a novel communication system: the energy harvesting channel. Unlike traditional communication systems, the transmitter of an energy harvesting channel is powered by an exogenous energy harvesting device and a finite-sized battery; consequently, at each channel use the system can only transmit a symbol whose energy consumption does not exceed the energy currently available. This new type of power supply introduces an unprecedented input constraint for the channel, one that is random, instantaneous, and has memory. Furthermore, the energy harvesting process is naturally observed causally at the transmitter, while no such information is provided to the receiver. Both of these features pose great challenges for the analysis of the channel capacity.
In this work we use techniques from channels with side information and from finite-state channels to obtain lower and upper bounds on the capacity of the energy harvesting channel. In particular, we study the stationarity and ergodicity conditions of a surrogate channel in order to compute and optimize achievable rates for the original channel. In addition, for practical code design we study the pairwise error probabilities of input sequences.
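The group-theoretic construction of entropy vectors can be sketched concretely: four subgroups G1, ..., G4 of a finite group G induce the entropy vector h_S = log2(|G| / |∩_{i∈S} G_i|), on which the Ingleton expression can be evaluated. The example below uses a small abelian group, which necessarily satisfies Ingleton (the violating groups identified in the dissertation are larger and non-abelian); it illustrates only the construction:

```python
import math
from itertools import product

# Group Z2 x Z2, with elements as tuples, and four subgroups
G = set(product([0, 1], repeat=2))
subs = {
    1: {(0, 0), (0, 1)},
    2: {(0, 0), (1, 0)},
    3: {(0, 0), (1, 1)},
    4: {(0, 0)},            # trivial subgroup
}

def h(*idx):
    """Group-characterizable entropy: h_S = log2(|G| / |intersection of G_i, i in S|)."""
    inter = set(G)
    for i in idx:
        inter &= subs[i]
    return math.log2(len(G) / len(inter))

# Ingleton expression I(1,2;3,4); it is nonnegative for every
# linearly representable (and every abelian-group) entropy vector
ingleton = (h(1, 3) + h(1, 4) + h(2, 3) + h(2, 4) + h(3, 4)
            - h(3) - h(4) - h(1, 2) - h(1, 3, 4) - h(2, 3, 4))
print(ingleton)
```

Searching for groups whose subgroup lattice drives this expression negative is exactly the computation behind identifying the Ingleton-violating families described above.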
Abstract:
The dispersion-compensation characteristics of a chirped fiber grating (CFG) at different dispersion-compensation positions are analyzed in an externally modulated cable-television (CATV) lightwave system, and an analytic expression for the composite second-order (CSO) distortion is derived. The analysis gives a reasonable explanation for the position-dependent effect of the CFG dispersion compensator that has been observed in practical systems, and the theoretical result is verified by experiment. The theory should be helpful in designing optical CATV fiber links with nodes at positions suitable both for intensity amplification and for dispersion compensation.
Abstract:
The Fokker-Planck (FP) equation is used to develop a general method for finding the spectral density for a class of randomly excited first-order systems. This class consists of systems satisfying stochastic differential equations of the form ẋ + f(x) = Σ_{j=1}^{m} h_j(x) n_j(t), where f and the h_j are piecewise linear functions (not necessarily continuous), and the n_j are stationary Gaussian white noises. For such systems, it is shown how the Laplace-transformed FP equation can be solved for the transformed transition probability density. By manipulation of the FP equation and its adjoint, a formula is derived for the transformed autocorrelation function in terms of the transformed transition density. From this, the spectral density is readily obtained. The method generalizes that of Caughey and Dienes, J. Appl. Phys., 32.11.
This method is applied to four subclasses: (1) m = 1, h_1 = const. (forcing-function excitation); (2) m = 1, h_1 = f (parametric excitation); (3) m = 2, h_1 = const., h_2 = f, with n_1 and n_2 correlated; (4) the same, uncorrelated. Many special cases, especially in subclass (1), are worked through to obtain explicit formulas for the spectral density, most of which have not been obtained before. Some results are graphed.
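For the simplest member of subclass (1), with a linear f, the spectral density is the familiar Lorentzian, which gives a quick numerical sanity check. The symbols a and sigma below are our own illustrative parameters, and the Fourier-transform convention used is one common choice, not necessarily the thesis's:

```python
import numpy as np

# Simplest member of subclass (1): linear drift f(x) = a*x, constant h1 = sigma,
# i.e. x' + a*x = sigma*n(t) with <n(t)n(t')> = delta(t - t').
a, sigma = 2.0, 1.5   # illustrative values, not from the thesis

# Known stationary results for this linear (Ornstein-Uhlenbeck) case:
#   autocorrelation  R(tau) = sigma^2/(2a) * exp(-a|tau|)
#   spectral density S(w)   = sigma^2 / (2*pi*(a^2 + w^2))
w = np.linspace(-400.0, 400.0, 400001)
S = sigma**2 / (2 * np.pi * (a**2 + w**2))

# Sanity check: integrating S over all frequencies recovers R(0) = sigma^2/(2a)
dw = w[1] - w[0]
area = float(np.sum((S[1:] + S[:-1]) * 0.5) * dw)   # trapezoid rule
print(area, sigma**2 / (2 * a))
```

The nonlinear and parametrically excited cases treated in the thesis replace this closed form with densities obtained from the Laplace-transformed FP equation, but the same consistency relation between S(ω) and R(0) applies.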
Dealing with parametrically excited first order systems leads to two complications. There is some controversy concerning the form of the FP equation involved (see Gray and Caughey, J. Math. Phys., 44.3); and the conditions which apply at irregular points, where the second order coefficient of the FP equation vanishes, are not obvious but require use of the mathematical theory of diffusion processes developed by Feller and others. These points are discussed in the first chapter, relevant results from various sources being summarized and applied. Also discussed is the steady-state density (the limit of the transition density as t → ∞).
Abstract:
I. The binding of the intercalating dye ethidium bromide to closed circular SV 40 DNA causes an unwinding of the duplex structure and a simultaneous and quantitatively equivalent unwinding of the superhelices. The buoyant densities and sedimentation velocities of both intact (I) and singly nicked (II) SV 40 DNAs were measured as a function of free dye concentration. The buoyant density data were used to determine the binding isotherms over a dye concentration range extending from 0 to 600 µg/ml in 5.8 M CsCl. At high dye concentrations all of the binding sites in II, but not in I, are saturated. At free dye concentrations less than 5.4 µg/ml, I has a greater affinity for dye than II. At a critical amount of dye bound, I and II have equal affinities, and at higher dye concentrations I has a lower affinity than II. The number of superhelical turns, τ, present in I is calculated at each dye concentration using Fuller and Waring's (1964) estimate of the angle of duplex unwinding per intercalation. The results reveal that SV 40 DNA I contains about -13 superhelical turns in concentrated salt solutions.
The free energy of superhelix formation is calculated as a function of τ from a consideration of the effect of the superhelical turns upon the binding isotherm of ethidium bromide to SV 40 DNA I. The value of the free energy is about 100 kcal/mole DNA in the native molecule. The free energy estimates are used to calculate the pitch and radius of the superhelix as a function of the number of superhelical turns. The pitch and radius of the native I superhelix are 430 Å and 135 Å, respectively.
A buoyant density method for the isolation and detection of closed circular DNA is described. The method is based upon the reduced binding of the intercalating dye ethidium bromide by closed circular DNA. In an application of this method it is found that HeLa cells contain, in addition to closed circular mitochondrial DNA of mean length 4.81 microns, a heterogeneous group of smaller DNA molecules varying in size from 0.2 to 3.5 microns and a paucidisperse group of multiples of the mitochondrial length.
II. The general theory is presented for the sedimentation equilibrium of a macromolecule in a concentrated binary solvent in the presence of an additional reacting small molecule. Equations are derived for the calculation of the buoyant density of the complex and for the determination of the binding isotherm of the reagent to the macrospecies. The standard buoyant density, a thermodynamic function, is defined and the density gradients which characterize the four component system are derived. The theory is applied to the specific cases of the binding of ethidium bromide to SV 40 DNA and of the binding of mercury and silver to DNA.
Abstract:
The centralized paradigm of a single controller and a single plant upon which modern control theory is built is no longer applicable to modern cyber-physical systems of interest, such as the power grid, software-defined networks, or automated highway systems, as these are all large-scale and spatially distributed. Both the scale and the distributed nature of these systems have motivated the decentralization of control schemes into local sub-controllers that measure, exchange and act on locally available subsets of the globally available system information. This decentralization of control logic leads to different decision makers acting on asymmetric information sets, introduces the need for coordination between them, and, perhaps not surprisingly, makes the resulting optimal control problem much harder to solve. In fact, shortly after such questions were posed, it was realized that seemingly simple decentralized optimal control problems are computationally intractable, with the Witsenhausen counterexample being a famous instance of this phenomenon. Spurred on by this perhaps discouraging result, a concerted forty-year effort to identify tractable classes of distributed optimal control problems culminated in the notion of quadratic invariance, which loosely states that if sub-controllers can exchange information with each other at least as quickly as the effects of their control actions propagate through the plant, then the resulting distributed optimal control problem admits a convex formulation.
The identification of quadratic invariance as an appropriate means of "convexifying" distributed optimal control problems led to a renewed enthusiasm in the controller synthesis community, resulting in a rich set of results over the past decade. The contributions of this thesis can be seen as being a part of this broader family of results, with a particular focus on closing the gap between theory and practice by relaxing or removing assumptions made in the traditional distributed optimal control framework. Our contributions are to the foundational theory of distributed optimal control, and fall under three broad categories, namely controller synthesis, architecture design and system identification.
We begin by providing two novel controller synthesis algorithms. The first is a solution to the distributed H-infinity optimal control problem subject to delay constraints, and provides the only known exact characterization of delay-constrained distributed controllers satisfying an H-infinity norm bound. The second is an explicit dynamic programming solution to a two-player LQR state-feedback problem with varying delays. Accommodating varying delays represents an important first step in combining distributed optimal control theory with the area of Networked Control Systems, which considers lossy channels in the feedback loop. Our next set of results is concerned with controller architecture design. When designing controllers for large-scale systems, the architectural aspects of the controller, such as the placement of actuators, sensors, and the communication links between them, can no longer be taken as given -- indeed, the task of designing this architecture is now as important as the design of the control laws themselves. To address this task, we formulate the Regularization for Design (RFD) framework, a unifying, computationally tractable approach, based on the model matching framework and atomic norm regularization, for the simultaneous co-design of a structured optimal controller and the architecture needed to implement it. Our final result is a contribution to distributed system identification. Traditional system identification techniques such as subspace identification are not computationally scalable, and they destroy rather than leverage any a priori information about the system's interconnection structure. We argue that in the context of system identification, an essential building block of any scalable algorithm is the ability to estimate local dynamics within a large interconnected system. To that end we propose a promising heuristic for identifying the dynamics of a subsystem that remains connected to a larger system.
We exploit the fact that the transfer function of the local dynamics is low-order, but full-rank, while the transfer function of the global dynamics is high-order, but low-rank, to formulate this separation task as a nuclear norm minimization problem. Finally, we conclude with a brief discussion of future research directions, with a particular emphasis on how to incorporate the results of this thesis, and those of optimal control theory in general, into a broader theory of dynamics, control and optimization in layered architectures.
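The low-rank versus full-rank distinction that motivates this formulation can be sketched numerically: the nuclear norm (the sum of singular values) is the standard convex surrogate for rank used in such minimization problems. The matrices below are synthetic stand-ins, not system data from the thesis:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical "global" response: low-rank, as when a few shared dynamics
# dominate the interconnection ...
U = rng.standard_normal((30, 2))
V = rng.standard_normal((2, 30))
global_part = U @ V                  # rank 2 by construction

# ... versus a "local" full-rank contribution
local_part = rng.standard_normal((30, 30))

def nuclear_norm(M):
    """Sum of singular values: the convex surrogate for rank."""
    return float(np.linalg.svd(M, compute_uv=False).sum())

r_global = np.linalg.matrix_rank(global_part)
r_local = np.linalg.matrix_rank(local_part)
print(r_global, r_local, nuclear_norm(global_part))
```

Minimizing the nuclear norm of the candidate global component, subject to the observed data being the sum of the two parts, then prefers exactly this kind of rank-deficient structure; the full separation algorithm in the thesis builds on this principle.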
Abstract:
The role of life-history theory in population and evolutionary analyses is outlined. In both cases general life histories can be analysed, but simpler life histories need fewer parameters for their description. The simplest case, that of semelparous (breed-once-then-die) organisms, needs only three parameters: somatic growth rate, mortality rate and fecundity. This case is analysed in detail. If fecundity is fixed, population growth rate can be calculated directly from mortality rate and somatic growth rate, and isoclines on which population growth rate is constant can be drawn in a "state space" with axes for mortality rate and somatic growth rate. In this space, density dependence is likely to produce a population trajectory from low density, where mortality rate is low, somatic growth rate is high and the population increases (positive population growth rate), to high density, after which the process reverses and the population returns to low density. Possible effects of pollution on this system are discussed. The state-space approach allows direct population analysis of the twin effects of pollution and density on population growth rate. Evolutionary analysis uses related methods to identify likely evolutionary outcomes when an organism's genetic options are subject to trade-offs. The trade-off considered here is between somatic growth rate and mortality rate. Such a trade-off could arise from an energy-allocation trade-off if resources spent on personal defence (reducing mortality rate) are not available for somatic growth. The evolutionary implications of pollution acting on such a trade-off are outlined.
Abstract:
Learning to perceive is faced with a classical paradox: if understanding is required for perception, how can we learn to perceive something new, something we do not yet understand? According to the sensorimotor approach, perception involves mastery of regular sensorimotor co-variations that depend on the agent and the environment, also known as the "laws" of sensorimotor contingencies (SMCs). In this sense, perception involves enacting relevant sensorimotor skills in each situation. It is important for this proposal that such skills can be learned and refined with experience, and yet, to date, the sensorimotor approach has had no explicit theory of perceptual learning. The situation is made more complex if we acknowledge the open-ended nature of human learning. In this paper we propose Piaget's theory of equilibration as a potential candidate to fulfill this role. This theory highlights the importance of intrinsic sensorimotor norms, in terms of the closure of sensorimotor schemes. It also explains how the equilibration of a sensorimotor organization faced with novelty or breakdowns proceeds by re-shaping pre-existing structures in coupling with the dynamical regularities of the world. In this way, learning to perceive is guided by the equilibration of emerging forms of skillful coping with the world. We demonstrate the compatibility between Piaget's theory and the sensorimotor approach by providing a dynamical formalization of equilibration to give an explicit micro-genetic account of sensorimotor learning and, by extension, of how we learn to perceive. This allows us to draw important lessons in the form of general principles for open-ended sensorimotor learning, including the need for an intrinsic normative evaluation by the agent itself. We also explore implications of our micro-genetic account at the personal level.