9 results for Exact computation

in CaltechTHESIS


Relevance: 20.00%

Abstract:

Various families of exact solutions to the Einstein and Einstein-Maxwell field equations of General Relativity are treated for situations of sufficient symmetry that only two independent variables arise. The mathematical problem then reduces to consideration of sets of two coupled nonlinear differential equations.

The physical situations in which such equations arise include: a) the external gravitational field of an axisymmetric, uncharged, steadily rotating body, b) cylindrical gravitational waves with two degrees of freedom, c) colliding plane gravitational waves, d) the external gravitational and electromagnetic fields of a static, charged axisymmetric body, and e) colliding plane electromagnetic and gravitational waves. Through the introduction of suitable potentials and coordinate transformations, a formalism is presented which treats all these problems simultaneously. These transformations and potentials may be used to generate new solutions to the Einstein-Maxwell equations from solutions to the vacuum Einstein equations, and vice versa.
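
For orientation, case a) is the setting of the well-known Ernst formulation, which packages the two coupled equations into a single complex one; the equations below are the standard published forms, quoted here only to make the two-variable reduction concrete. In vacuum,

(Re ℰ) ∇²ℰ = ∇ℰ · ∇ℰ,

where ℰ is a complex potential built from the norm and twist potential of the timelike Killing vector. With an electromagnetic potential Φ, the Einstein-Maxwell analogue becomes the coupled pair

(Re ℰ + |Φ|²) ∇²ℰ = (∇ℰ + 2Φ*∇Φ) · ∇ℰ,
(Re ℰ + |Φ|²) ∇²Φ = (∇ℰ + 2Φ*∇Φ) · ∇Φ.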

The calculus of differential forms is used as a tool for generation of similarity solutions and generalized similarity solutions. It is further used to find the invariance group of the equations; this in turn leads to various finite transformations that give new, physically distinct solutions from old. Some of the above results are then generalized to the case of three independent variables.

Relevance: 20.00%

Abstract:

In Part I, a method for finding solutions of certain diffusive and dispersive nonlinear evolution equations is introduced. The method consists of a straightforward iteration procedure, applied to the equation as it stands (in most cases), which can be carried out to all terms, followed by a summation of the resulting infinite series, sometimes directly and other times in terms of traces of inverses of operators in an appropriate space.

We first illustrate our method with Burgers' and Thomas' equations, and show how it quickly leads to the Cole-Hopf transformation, which is known to linearize these equations.
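
For reference (a textbook statement, not quoted from the thesis): writing Burgers' equation as u_t + u u_x = ν u_xx, the Cole-Hopf substitution

u = -2ν φ_x / φ

turns it into the linear heat equation φ_t = ν φ_xx, which is why the iteration series for such equations can be summed in closed form.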

We also apply this method to the Korteweg-de Vries, nonlinear (cubic) Schrödinger, sine-Gordon, modified KdV, and Boussinesq equations. In all these cases the multisoliton solutions are easily obtained and new expressions for some of them follow. More generally, we show that the Marchenko integral equations, together with the inverse problem from which they originate, follow naturally from our expressions.
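
As a concrete instance of such multisoliton formulas (standard normalization, supplied here for orientation): for the KdV equation u_t + 6u u_x + u_xxx = 0, the one-soliton solution is

u(x,t) = 2κ² sech²(κ(x - 4κ²t - x₀)),

and the N-soliton solutions take the form u = 2 ∂²/∂x² ln τ, with τ the determinant of an N×N matrix built from the wave numbers κ_1, …, κ_N.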

Only solutions that are small in some sense (i.e., they tend to zero as the independent variable goes to ∞) are covered by our methods. However, by studying the effect of writing the initial iterate u_1 = u_1(x,t) as a sum u_1 = ũ_1 + ũ̃_1 when we know the solution which results if u_1 = ũ_1, we are led to expressions that describe the interaction of two arbitrary solutions, only one of which is small. This should not be confused with Bäcklund transformations and is more in the direction of performing the inverse scattering over an arbitrary “base” solution. Thus we are able to write expressions for the interaction of a cnoidal wave with a multisoliton in the case of the KdV equation; these expressions are somewhat different from the ones obtained by Wahlquist (1976). Similarly, we find multi-dark-pulse solutions and solutions describing the interaction of envelope solitons with a uniform wave train in the case of the Schrödinger equation.

Other equations tractable by our method are presented. These include the self-induced transparency, reduced Maxwell-Bloch, and two-dimensional nonlinear Schrödinger equations. Higher-order and matrix-valued equations with nonscalar dispersion functions are also presented.

In Part II, the second Painlevé transcendent is treated in conjunction with the similarity solutions of the Korteweg-de Vries equation and the modified Korteweg-de Vries equation.

Relevance: 20.00%

Abstract:

We develop new algorithms which combine the rigorous theory of mathematical elasticity with the geometric underpinnings and computational attractiveness of modern tools in geometry processing. We develop a simple elastic energy based on the Biot strain measure, which improves on state-of-the-art methods in geometry processing. We use this energy within a constrained optimization problem to provide, for the first time, surface parameterization tools which guarantee injectivity and bounded distortion, are user-directable, and scale to large meshes. With the help of some new generalizations in the computation of matrix functions and their derivatives, we extend our methods to a large class of hyperelastic stored energy functions quadratic in piecewise analytic strain measures, including the Hencky (logarithmic) strain, opening up a wide range of possibilities for robust and efficient nonlinear elastic simulation and geometry processing by elastic analogy.
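
A minimal numerical sketch of a Biot-strain energy of this general kind (our own illustration under common conventions, not the thesis's actual formulation): with the polar decomposition F = RS of the deformation gradient, the Biot strain is S - I, and a quadratic energy is W = μ‖S - I‖_F² + (λ/2) tr(S - I)².

```python
import numpy as np
from scipy.linalg import polar

def biot_energy(F, mu=1.0, lam=1.0):
    """Quadratic energy in the Biot strain S - I, where F = R S is the
    polar decomposition of the deformation gradient (R rotation, S
    symmetric stretch). mu, lam are illustrative material parameters."""
    R, S = polar(F)                 # F = R @ S, S symmetric positive semidefinite
    E = S - np.eye(F.shape[0])      # Biot strain
    return mu * np.sum(E * E) + 0.5 * lam * np.trace(E) ** 2

# Example: a 2D element stretched 10% along x; pure rotations cost zero energy
print(biot_energy(np.diag([1.1, 1.0])))
```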

Relevance: 20.00%

Abstract:

Detailed pulsed neutron measurements have been performed in graphite assemblies ranging in size from 30.48 cm x 38.10 cm x 38.10 cm to 91.44 cm x 66.67 cm x 66.67 cm. Results of the measurements have been compared to a modeled theoretical computation.

In the first set of experiments, we measured the effective decay constant of the neutron population in ten graphite stacks as a function of time after the source burst. We found the decay to be non-exponential in the six smallest assemblies, while in the three larger assemblies the decay was exponential over a significant portion of the total measuring interval. The decay in the largest stack was exponential over the entire ten-millisecond measuring interval. The non-exponential decay mode occurred when the effective decay constant exceeded 1600 sec^(-1).

In a second set of experiments, we measured the spatial dependence of the neutron population in four graphite stacks as a function of time after the source pulse. By performing a harmonic analysis of the spatial shape of the neutron distribution, we were able to compute the effective decay constants of the first two spatial modes. In addition, we were able to compute the time-dependent effective wave number of the neutron distribution in the stacks.
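
A sketch of this kind of harmonic analysis (illustrative only; the cosine mode shapes of a bare slab and all variable names are our assumptions, not details from the thesis): project each measured spatial profile onto the first two modes, then fit the logarithm of each mode amplitude linearly in time to obtain its effective decay constant.

```python
import numpy as np

def mode_decay_constants(x, t, counts, a):
    """x: detector positions measured from the stack center (cm),
    t: times after the pulse (s), counts: array of shape (len(t), len(x)),
    a: extrapolated stack width (cm). Returns decay constants (1/s) of
    the first two spatial modes, assuming mode amplitudes stay positive."""
    modes = [np.cos(np.pi * x / a), np.cos(3 * np.pi * x / a)]
    decays = []
    for phi in modes:
        amp = counts @ phi / (phi @ phi)       # projection onto the mode
        slope = np.polyfit(t, np.log(amp), 1)[0]
        decays.append(-slope)                  # amplitude ~ exp(-lambda * t)
    return decays
```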

Finally, we used a Laplace transform technique and a simple modeled scattering kernel to solve a diffusion equation for the time and energy dependence of the neutron distribution in the graphite stacks. Comparison of these theoretical results with the results of the first set of experiments indicated that a more exact theoretical analysis would be required to adequately describe the experiments.

The implications of our experimental results for the theory of pulsed neutron experiments in polycrystalline media are discussed in the last chapter.

Relevance: 20.00%

Abstract:

Melting temperature calculation has important applications in the theoretical study of phase diagrams and in computational materials screening. In this thesis, we present two new methods, namely the improved Widom particle-insertion method and the small-cell coexistence method, which we developed in order to capture melting temperatures both accurately and quickly.

We propose a scheme that drastically improves the efficiency of Widom's particle insertion method by efficiently sampling cavities while calculating the integrals providing the chemical potentials of a physical system. This idea enables us to calculate chemical potentials of liquids directly from first principles without the help of any reference system, which is necessary in the commonly used thermodynamic integration method. As an example, we apply our scheme, combined with the density functional formalism, to the calculation of the chemical potential of liquid copper. The calculated chemical potential is further used to locate the melting temperature. The calculated results agree closely with experiment.
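
The textbook Widom estimator that the scheme above accelerates (the cavity-biased sampling itself is the thesis's contribution and is not reproduced here) is μ_ex = -k_B T ln ⟨exp(-ΔU / k_B T)⟩, averaged over random test-particle insertions into equilibrium configurations. A minimal sketch, with delta_U a caller-supplied energy function and all names hypothetical:

```python
import numpy as np

def widom_mu_excess(configs, box, kT, delta_U, n_trials=1000, seed=0):
    """Plain (unbiased) Widom insertion. configs: list of equilibrium
    particle-coordinate arrays; delta_U(positions, trial, box): energy
    change of inserting a test particle at `trial` in a cubic box."""
    rng = np.random.default_rng(seed)
    boltzmann = []
    for pos in configs:
        for _ in range(n_trials):
            trial = rng.uniform(0.0, box, size=3)      # uniform insertion point
            boltzmann.append(np.exp(-delta_U(pos, trial, box) / kT))
    return -kT * np.log(np.mean(boltzmann))            # excess chemical potential
```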

We propose the small-cell coexistence method based on the statistical analysis of small-size coexistence MD simulations. It eliminates the risk of a metastable superheated solid in the fast-heating method, while also significantly reducing the computational cost relative to the traditional large-scale coexistence method. Using empirical potentials, we validate the method and systematically study the finite-size effect on the calculated melting points. The method converges to the exact result in the limit of a large system size. An accuracy within 100 K in melting temperature is usually achieved when the simulation contains more than 100 atoms. DFT examples of tantalum, high-pressure sodium, and the ionic material NaCl are shown to demonstrate the accuracy and flexibility of the method in practical applications. The method serves as a promising approach for large-scale automated materials screening in which the melting temperature is a design criterion.
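
One way to picture the statistical analysis step (a schematic of our own, assuming each small coexistence run ends either fully melted or fully frozen): treat the outcome at each temperature as a Bernoulli trial and take the melting point as the 50% crossing of a fitted sigmoid.

```python
import numpy as np
from scipy.optimize import curve_fit

def melting_point(T, melted):
    """T: temperatures (K) of independent small-cell coexistence runs;
    melted: 1 if a run ended liquid, 0 if it ended solid.
    Fits P(melt) = 1 / (1 + exp(-(T - Tm)/w)) and returns Tm."""
    sigmoid = lambda T, Tm, w: 1.0 / (1.0 + np.exp(-(T - Tm) / w))
    (Tm, w), _ = curve_fit(sigmoid, T, melted, p0=[np.median(T), 50.0])
    return Tm
```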

We present in detail two examples of refractory materials. First, we demonstrate how key material properties that provide guidance in the design of refractory materials can be accurately determined via ab initio thermodynamic calculations in conjunction with experimental techniques based on synchrotron X-ray diffraction and thermal analysis under laser-heated aerodynamic levitation. The properties considered include melting point, heat of fusion, heat capacity, thermal expansion coefficients, thermal stability, and sublattice disordering, as illustrated in a motivating example of lanthanum zirconate (La2Zr2O7). The close agreement with experiment in the known but structurally complex compound La2Zr2O7 provides a good indication that the computational methods described can be used within a computational screening framework to identify novel refractory materials. Second, we report an extensive investigation into the melting temperatures of the Hf-C and Hf-Ta-C systems using ab initio calculations. With melting points above 4000 K, hafnium carbide (HfC) and tantalum carbide (TaC) are among the most refractory binary compounds known to date. Their mixture, with the general formula Ta_xHf_(1-x)C_y, is known to have a melting point of 4215 K at the composition Ta4HfC5, which has long been considered the highest melting temperature of any solid. Very few measurements of the melting point of tantalum and hafnium carbides have been documented, because of the obvious experimental difficulties at extreme temperatures. The investigation allows us to identify three major chemical factors that contribute to the high melting temperatures. Based on these three factors, we propose and explore a new class of materials which, according to our ab initio calculations, may possess even higher melting temperatures than Ta-Hf-C. This example also demonstrates the feasibility of materials screening and discovery via ab initio calculations for the optimization of "higher-level" properties whose determination requires extensive sampling of atomic configuration space.

Relevance: 20.00%

Abstract:

A noncommutative 2-torus is one of the main toy models of noncommutative geometry, and a noncommutative n-torus is a straightforward generalization of it. In 1980, Pimsner and Voiculescu [17] described a six-term exact sequence which allows for the computation of the K-theory of noncommutative tori. It follows that both even and odd K-groups of n-dimensional noncommutative tori are free abelian groups on 2^(n-1) generators. In 1981, the Powers-Rieffel projector was described [19], which, together with the class of the identity, generates the even K-theory of noncommutative 2-tori. In 1984, Elliott [10] computed the trace and Chern character on these K-groups. According to Rieffel [20], the odd K-theory of a noncommutative n-torus coincides with the group of connected components of the group of invertible elements of the algebra. In particular, generators of K-theory can be chosen to be invertible elements of the algebra. In Chapter 1, we derive an explicit formula for the first nontrivial generator of the odd K-theory of noncommutative tori. This gives the full set of generators for the odd K-theory of noncommutative 3-tori and 4-tori.
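
For orientation, the six-term sequence for a crossed product A ⋊_α ℤ has the standard cyclic form (quoted from the general theory rather than from the notation of [17]):

K_0(A) --(id - α_*)--> K_0(A) --------> K_0(A ⋊_α ℤ)
   ↑                                          |
K_1(A ⋊_α ℤ) <-------- K_1(A) <--(id - α_*)-- K_1(A)

Since the noncommutative n-torus is built from n successive crossed products by ℤ, iterating this sequence gives K_0 ≅ K_1 ≅ ℤ^(2^(n-1)).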

In Chapter 2, we apply the graded-commutative framework of differential geometry to the polynomial subalgebra of the noncommutative torus algebra. We use the framework of differential geometry described in [27], [14], [25], [26]. In order to apply this framework to the noncommutative torus, the notion of a graded-commutative algebra has to be generalized: the "signs" should be allowed to take values in U(1), rather than just {-1,1}. Such a generalization is well known (see, e.g., [8] in the context of linear algebra). We reformulate the relevant results of [27], [14], [25], [26] using this extended notion of sign. We show how this framework can be used to construct differential operators, differential forms, and jet spaces on noncommutative tori. Then, we compare the constructed differential forms to the ones obtained from the spectral triple of the noncommutative torus. Sections 2.1-2.3 recall the basic notions from [27], [14], [25], [26], with the required change in the notion of "sign". In Section 2.4, we apply these notions to the polynomial subalgebra of the noncommutative torus algebra. This polynomial subalgebra is similar to a free graded-commutative algebra. We show that, when restricted to the polynomial subalgebra, Connes' construction of differential forms gives the same answer as the one obtained from the graded-commutative differential geometry. One may try to extend these notions to the smooth noncommutative torus algebra, but this was not done in this work.

A reconstruction of the Beilinson-Bloch regulator (for curves) via Fredholm modules was given by Eugene Ha in [12]. However, the proof in [12] contains a critical gap; in Chapter 3, we close this gap. More specifically, we do this by obtaining some technical results, and by proving Property 4 of Section 3.7 (see Theorem 3.9.4), which implies that such a reformulation is, indeed, possible. The main motivation for this reformulation is the longer-term goal of finding possible analogs of the second K-group (in the context of algebraic geometry and K-theory of rings) and of the regulators for noncommutative spaces. This work should be seen as a necessary preliminary step for that purpose.

For the convenience of the reader, we also give a short description of the results from [12], as well as some background material on central extensions and the Connes-Karoubi character.

Relevance: 20.00%

Abstract:

Part I

Numerical solutions to the S-limit equations for the helium ground state and excited triplet state, and for the hydride ion ground state, are obtained with the second and fourth difference approximations. The results for the ground states are superior to previously reported values. The coupled equations resulting from the partial-wave expansion of the exact helium atom wavefunction were solved, giving accurate S-, P-, D-, F-, and G-limits. The G-limit is -2.90351 a.u., compared to the exact energy of -2.90372 a.u.
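
The expansion that defines these limits (standard form, supplied for the reader's convenience rather than taken from the abstract): for S states, the exact wavefunction can be written

Ψ(r₁, r₂, θ₁₂) = Σ_ℓ f_ℓ(r₁, r₂) P_ℓ(cos θ₁₂),

and the S-, P-, D-, F-, and G-limits are the variational energies obtained by truncating the sum at ℓ = 0, 1, 2, 3, 4, respectively, and solving the resulting coupled two-variable equations.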

Part II

The pair functions which determine the exact first-order wavefunction for the ground state of the three-electron atom are found with the matrix finite difference method. The second- and third-order energies for the (1s1s) ^1S, (1s2s) ^3S, and (1s2s) ^1S states of the two-electron atom are presented along with contour and perspective plots of the pair functions. The total energy for the three-electron atom with a nuclear charge Z is found to be

E(Z) = -1.125 Z^2 + 1.022805 Z - 0.408138 - 0.025515 (1/Z) + O(1/Z^2) a.u.

Relevance: 20.00%

Abstract:

A variety of neural signals have been measured as correlates of consciousness. In particular, late current sinks in layer 1, distributed activity across the cortex, and feedback processing have all been implicated. What are the physiological underpinnings of these signals? What computational role do they play in the brain? Why do they correlate with consciousness? This thesis begins to answer these questions by focusing on the pyramidal neuron. As the primary communicator of long-range feedforward and feedback signals in the cortex, the pyramidal neuron is set up to play an important role in establishing distributed representations. Additionally, its dendritic extent, reaching layer 1, is well situated to receive feedback inputs and contribute to current sinks in the upper layers. An investigation of pyramidal neuron physiology is therefore necessary to understand how the brain creates, and potentially uses, the neural correlates of consciousness. An important part of this thesis is establishing the computational role that dendritic physiology plays. To do this, a combined experimental and modeling approach is used.

This thesis begins with single-cell experiments in layer 5 and layer 2/3 pyramidal neurons. In both cases, dendritic nonlinearities are characterized and found to be integral regulators of neural output. Particular attention is paid to calcium spikes and NMDA spikes, which both exist in the apical dendrites, at considerable distances from the spike initiation zone. These experiments are then used to create detailed multicompartmental models. These models are used to test hypotheses regarding the spatial distribution of membrane channels, to quantify the effects of certain experimental manipulations, and to establish the computational properties of the single cell. We find that pyramidal neuron physiology can carry out a coincidence detection mechanism. Further abstraction of these models reveals potential mechanisms for spike time control, frequency modulation, and tuning. Finally, a set of experiments is carried out to establish the effect of long-range feedback inputs onto the pyramidal neuron. A final discussion then explores a potential way in which the physiology of pyramidal neurons can establish distributed representations and contribute to consciousness.
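
A toy rendering of the coincidence-detection idea (a deliberately abstract sketch of our own, not one of the thesis's multicompartmental models): a dendritic event fires only when a somatic spike and a dendritic input arrive within a narrow time window, as with calcium spikes triggered by coincident back-propagating action potentials and apical input.

```python
def coincidence_detector(soma_spikes, dend_inputs, window=0.005):
    """Toy model: return the times at which a dendritic Ca-like event is
    triggered because a somatic spike and a dendritic input coincide
    within `window` seconds. Inputs are lists of spike times (s)."""
    return [ts for ts in soma_spikes
            if any(abs(ts - td) <= window for td in dend_inputs)]

# Coincident at t=0.100 (dendritic input 2 ms later); not at t=0.300
print(coincidence_detector([0.100, 0.300], [0.102, 0.350]))
```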

Relevance: 20.00%

Abstract:

Optical Coherence Tomography (OCT) is a popular, rapidly growing imaging technique with an increasing number of biomedical applications due to its noninvasive nature. However, there are three major challenges in understanding and improving an OCT system: (1) Obtaining an OCT image is not easy. It either takes a real medical experiment or requires days of computer simulation. Without much data, it is difficult to study the physical processes underlying OCT imaging of different objects, simply because there aren't many imaged objects. (2) Interpretation of an OCT image is also hard. This challenge is more profound than it appears. For instance, it would require a trained expert to tell from an OCT image of human skin whether there is a lesion or not. This is expensive in its own right, but even the expert cannot be sure about the exact size of the lesion or the width of the various skin layers. The take-away message is that analyzing an OCT image even at a high level usually requires a trained expert, and pixel-level interpretation is simply unrealistic. The reason is simple: we have OCT images but not their underlying ground-truth structure, so there is nothing to learn from. (3) The imaging depth of OCT is very limited (a millimeter or less in human tissue). While OCT utilizes infrared light for illumination to stay noninvasive, the downside is that photons at such long wavelengths can only penetrate a limited depth into the tissue before being back-scattered. To image a particular region of a tissue, photons first need to reach that region. As a result, OCT signals from deeper regions of the tissue are both weak (since few photons reach there) and distorted (due to multiple scattering of the contributing photons). This fact alone makes OCT images very hard to interpret.

This thesis addresses the above challenges by successfully developing an advanced Monte Carlo simulation platform which is 10,000 times faster than the state-of-the-art simulator in the literature, bringing the simulation time down from 360 hours to a single minute. This powerful simulation tool not only enables us to efficiently generate as many OCT images of objects with arbitrary structure and shape as we want on a common desktop computer, but also provides the underlying ground truth of the simulated images, because we specify it at the beginning of the simulation. This is one of the key contributions of this thesis. What allows us to build such a powerful simulation tool includes a thorough understanding of the signal formation process, a clever implementation of the importance sampling/photon splitting procedure, efficient use of a voxel-based mesh system in determining photon-mesh intersections, and parallel computation of the different A-scans that constitute a full OCT image, among other programming and mathematical tricks, which will be explained in detail later in the thesis.
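
A fragment of the kind of photon-packet stepping such a simulator performs (a generic tissue-optics sketch under standard conventions; the importance-sampling and photon-splitting machinery that provides the thesis's speedup is not shown): free paths follow a Beer-Lambert distribution and scattering angles follow the Henyey-Greenstein phase function.

```python
import numpy as np

rng = np.random.default_rng(0)

def free_path(mu_t):
    """Distance to the next interaction in a medium with total
    attenuation coefficient mu_t: p(s) ~ exp(-mu_t * s)."""
    return -np.log(rng.random()) / mu_t

def hg_cos_theta(g):
    """Sample cos(theta) from the Henyey-Greenstein phase function
    with anisotropy factor g; g = 0 reduces to isotropic scattering."""
    xi = rng.random()
    if abs(g) < 1e-6:
        return 2.0 * xi - 1.0
    frac = (1.0 - g * g) / (1.0 - g + 2.0 * g * xi)
    return (1.0 + g * g - frac * frac) / (2.0 * g)
```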

Next we aim at the inverse problem: given an OCT image, predict/reconstruct its ground-truth structure at the pixel level. By solving this problem we would be able to interpret an OCT image completely and precisely without help from a trained expert. It turns out that we can do much better: for simple structures we are able to reconstruct the ground truth of an OCT image more than 98% correctly, and for more complicated structures (e.g., a multi-layered brain structure) we achieve 93%. We accomplished this through extensive use of Machine Learning. The success of the Monte Carlo simulation already puts us in a great position by providing a great deal of data (effectively unlimited) in the form of (image, truth) pairs. Through a transformation of the high-dimensional response variable, we convert the learning task into a multi-output multi-class classification problem and a multi-output regression problem. We then build a hierarchical architecture of machine learning models (a committee of experts) and train different parts of the architecture with specifically designed data sets. In prediction, an unseen OCT image first goes through a classification model that determines its structure (e.g., the number and types of layers present in the image); the image is then handed to a regression model trained specifically for that particular structure, which predicts the lengths of the different layers and thereby reconstructs the ground truth of the image. We also demonstrate that ideas from Deep Learning can be useful for further improving the performance.
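
A minimal rendering of this classify-then-regress hierarchy (our own sketch using scikit-learn; the data arrays are random placeholders, and the thesis's committee-of-experts architecture is more elaborate than this):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

# Placeholder data: 500 flattened "images", 3 structure types, 4 layer thicknesses
X_train = np.random.rand(500, 64)
structure_labels = np.random.randint(0, 3, 500)
y_thickness = np.random.rand(500, 4)

# Stage 1: classify the structure type of each image.
clf = RandomForestClassifier(n_estimators=200).fit(X_train, structure_labels)

# Stage 2: one expert regressor per structure type, trained only on
# images of that type, predicting per-layer thickness (the ground truth).
experts = {}
for s in np.unique(structure_labels):
    mask = structure_labels == s
    experts[s] = RandomForestRegressor(n_estimators=200).fit(
        X_train[mask], y_thickness[mask])

def reconstruct(image_vector):
    """Route an unseen image through the predicted structure's expert."""
    s = clf.predict(image_vector.reshape(1, -1))[0]
    return experts[s].predict(image_vector.reshape(1, -1))[0]
```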

It is worth pointing out that solving the inverse problem automatically improves the effective imaging depth, since the lower half of an OCT image (i.e., greater depth), which previously could hardly be seen, now becomes fully resolved. Interestingly, although the OCT signals constituting the lower half of the image are weak, messy, and uninterpretable to human eyes, they still carry enough information that a well-trained machine learning model, when fed these signals, yields precisely the true structure of the object being imaged. This is just another case where Artificial Intelligence (AI) outperforms humans. To the best of the author's knowledge, this thesis is not only a success but also the first attempt to reconstruct OCT images at the pixel level. Even attempting this kind of task would require fully annotated OCT images, and a lot of them (hundreds or even thousands). This is clearly impossible without a powerful simulation tool like the one developed in this thesis.