45 results for Invariant integrals


Relevance: 10.00%

Abstract:

What kinds of motion can occur in classical mechanics? We address this question by looking at the structures traced out by trajectories in phase space; the most orderly, completely integrable systems are characterized by phase trajectories confined to low-dimensional, invariant tori. The KAM theory examines what happens to the tori when an integrable system is subjected to a small perturbation and finds that, for small enough perturbations, most of them survive.

The KAM theory is mute about the disrupted tori, but, for two-dimensional systems, Aubry and Mather discovered an astonishing picture: the broken tori are replaced by "cantori," tattered, Cantor-set remnants of the original invariant curves. We seek to extend Aubry and Mather's picture to higher dimensional systems and report two kinds of studies; both concern perturbations of a completely integrable, four-dimensional symplectic map. In the first study we compute some numerical approximations to Birkhoff periodic orbits; sequences of such orbits should approximate any higher dimensional analogs of the cantori. In the second study we prove converse KAM theorems; that is, we use a combination of analytic arguments and rigorous, machine-assisted computations to find perturbations so large that no KAM tori survive. We are able to show that the last few of our Birkhoff orbits exist in a regime where there are no tori.
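For orientation, the behavior described above can be reproduced in miniature with the Chirikov standard map, the textbook two-dimensional analogue of the perturbed integrable symplectic maps considered here (the thesis itself works with a four-dimensional map, so the sketch below is purely illustrative):

```python
import numpy as np

# Chirikov standard map, an area-preserving (2D symplectic) map:
#   y' = y - (K / 2*pi) * sin(2*pi*x),   x' = x + y'      (both taken mod 1)
# At K = 0 every orbit lies on an invariant circle (a KAM torus); as K grows the tori
# break up, and converse-KAM arguments show that none survive once K is large enough.
# The golden-mean torus is the last to go, near K ~ 0.97 (Greene's criterion).

def standard_map(x, y, K):
    y_new = (y - K / (2.0 * np.pi) * np.sin(2.0 * np.pi * x)) % 1.0
    x_new = (x + y_new) % 1.0
    return x_new, y_new

def orbit(x0, y0, K, steps=2000):
    xs, ys = [x0], [y0]
    for _ in range(steps):
        x0, y0 = standard_map(x0, y0, K)
        xs.append(x0)
        ys.append(y0)
    return np.array(xs), np.array(ys)

# A weakly perturbed orbit (most tori intact) versus a strongly perturbed one (chaotic sea).
xs_weak, ys_weak = orbit(0.1, 0.36, K=0.3)
xs_strong, ys_strong = orbit(0.1, 0.36, K=2.0)
```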

Relevance: 10.00%

Abstract:

Recent observations of the temperature anisotropies of the cosmic microwave background (CMB) favor an inflationary paradigm in which the scale factor of the universe inflated by many orders of magnitude at some very early time. Such a scenario would produce the observed large-scale isotropy and homogeneity of the universe, as well as the scale-invariant perturbations responsible for the observed (10 parts per million) anisotropies in the CMB. An inflationary epoch is also theorized to produce a background of gravitational waves (or tensor perturbations), the effects of which can be observed in the polarization of the CMB. The E-mode (or parity even) polarization of the CMB, which is produced by scalar perturbations, has now been measured with high significance. Contrastingly, today the B-mode (or parity odd) polarization, which is sourced by tensor perturbations, has yet to be observed. A detection of the B-mode polarization of the CMB would provide strong evidence for an inflationary epoch early in the universe's history.

In this work, we explore experimental techniques and analysis methods used to probe the B-mode polarization of the CMB. These experimental techniques have been used to build the Bicep2 telescope, which was deployed to the South Pole in 2009. After three years of observations, Bicep2 has made one of the deepest observations of the degree-scale polarization of the CMB to date. This work also describes analysis methods developed for the Bicep1 three-year data analysis, which includes the full data set acquired by Bicep1. This analysis has produced the tightest constraint on the B-mode polarization of the CMB to date, corresponding to a tensor-to-scalar ratio estimate of r = 0.04 ± 0.32, or a Bayesian 95% credible interval of r < 0.70. These analysis methods, in addition to producing this new constraint, are directly applicable to future analyses of Bicep2 data. Taken together, the experimental techniques and analysis methods described herein promise to open a new observational window into the inflationary epoch and the initial conditions of our universe.
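As a toy illustration of how a point estimate r = 0.04 ± 0.32 relates to a one-sided Bayesian limit, one can posit a Gaussian likelihood with a flat prior restricted to the physical region r ≥ 0; the published bound (r < 0.70) comes from the full Bicep1 likelihood, so this sketch only lands in the right neighborhood:

```python
import numpy as np
from scipy.stats import norm

# Toy 95% credible upper limit, assuming a Gaussian likelihood in r and a flat prior on r >= 0.
r_hat, sigma = 0.04, 0.32
grid = np.linspace(0.0, 3.0, 30001)                 # physical region r >= 0
dr = grid[1] - grid[0]

posterior = norm.pdf(grid, loc=r_hat, scale=sigma)  # Gaussian likelihood x flat prior
posterior /= posterior.sum() * dr                   # normalize over r >= 0

cdf = np.cumsum(posterior) * dr
r95 = grid[np.searchsorted(cdf, 0.95)]
print(f"95% credible upper limit: r < {r95:.2f}")   # ~0.65 under these simplifying assumptions
```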

Relevance: 10.00%

Abstract:

This thesis covers a range of topics in numerical and analytical relativity, centered around introducing tools and methodologies for the study of dynamical spacetimes. The scope of the studies is limited to classical (as opposed to quantum) vacuum spacetimes described by Einstein's general theory of relativity. The numerical works presented here are carried out within the Spectral Einstein Code (SpEC) infrastructure, while analytical calculations extensively utilize Wolfram's Mathematica program.

We begin by examining highly dynamical spacetimes such as binary black hole mergers, which can be investigated using numerical simulations. However, there are difficulties in interpreting the output of such simulations. One difficulty stems from the lack of a canonical coordinate system (henceforth referred to as gauge freedom) and tetrad, against which quantities such as the Newman-Penrose Psi_4 (usually interpreted as the gravitational-wave part of the curvature) should be measured. We tackle this problem in Chapter 2 by introducing a set of geometrically motivated coordinates that are independent of the simulation gauge choice, as well as a quasi-Kinnersley tetrad, which is likewise invariant under gauge changes and is optimally suited to the task of gravitational-wave extraction.

Another difficulty arises from the need to condense the overwhelming amount of data generated by the numerical simulations. In order to extract physical information in a succinct and transparent manner, one may define a version of gravitational field lines and field strength using spatial projections of the Weyl curvature tensor. Introduction, investigation and utilization of these quantities will constitute the main content in Chapters 3 through 6.
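A minimal sketch of the idea behind such field lines, in the weak-field limit where the "electric" part of the Weyl tensor reduces (up to sign convention) to the Newtonian tidal tensor of a point mass; the eigenvectors of that tensor are the directions whose integral curves form the field lines (the full construction in Chapters 3 through 6 uses the curvature data from the simulations, not this Newtonian stand-in):

```python
import numpy as np

# Newtonian tidal tensor of a point mass M at the origin (G = c = 1):
#   E_ij = d2(Phi)/dx_i dx_j = (M / r**3) * (delta_ij - 3 n_i n_j),  with  Phi = -M/r.
# In the weak-field limit this is the spatial "electric" projection of the Weyl tensor;
# field lines are obtained by integrating its eigenvector fields.

def tidal_tensor(x, M=1.0):
    x = np.asarray(x, float)
    r = np.linalg.norm(x)
    n = x / r
    return (M / r**3) * (np.eye(3) - 3.0 * np.outer(n, n))

point = np.array([2.0, 1.0, 0.5])
vals, vecs = np.linalg.eigh(tidal_tensor(point))

# One negative (radial, stretching) eigenvalue, -2M/r^3, and two positive (transverse,
# squeezing) eigenvalues, +M/r^3; the radial eigenvector points along the position vector.
print("eigenvalues:", vals)
print("radial eigenvector:", vecs[:, 0])
```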

For the last two chapters, we turn to the analytical study of a simpler dynamical spacetime, namely a perturbed Kerr black hole. We introduce in Chapter 7 a new analytical approximation to the quasi-normal mode (QNM) frequencies, and relate various properties of these modes to wave packets traveling on unstable photon orbits around the black hole. In Chapter 8, we study a bifurcation in the QNM spectrum as the black hole's spin a approaches extremality.
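For context, the standard eikonal (geometric-optics) link between quasi-normal modes and the unstable circular photon orbit can be written down explicitly for Schwarzschild; this is the familiar large-l relation, not the new approximation developed in Chapter 7:

```python
import numpy as np

# Eikonal estimate of Schwarzschild quasi-normal-mode frequencies:
#   M * omega  ~  Omega_c * l  -  i * (n + 1/2) * |lambda|,
# where Omega_c is the orbital frequency of the circular photon orbit (r = 3M) and
# lambda is its Lyapunov exponent (equal to Omega_c for Schwarzschild).

M = 1.0
Omega_c = 1.0 / (3.0 * np.sqrt(3.0) * M)   # photon-sphere orbital frequency
lam = Omega_c                              # Lyapunov exponent of the orbit

for l in (2, 3, 10):
    n = 0                                  # fundamental overtone
    omega = Omega_c * l - 1j * (n + 0.5) * lam
    print(f"l = {l:2d}:  eikonal M*omega = {omega.real:.4f} {omega.imag:+.4f}i")

# For comparison, the known l = 2, n = 0 value is M*omega = 0.3737 - 0.0890i;
# the eikonal estimate improves as l grows.
```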

Relevance: 10.00%

Abstract:

The interpretation of extracellular cues leading to the polarization of intracellular components and asymmetric cell divisions is a fundamental part of metazoan organogenesis. The C. elegans vulva, with its invariant cell lineage and interaction of multiple cell signaling pathways, provides an excellent model for the study of cell polarity within an organized epithelial tissue. Herein I discuss the interaction of Wnt and FGF signaling in controlling vulval cell lineage polarity with emphasis on the posterior-most cell that forms the vulva, P7.p.

The mirror symmetry of the C. elegans vulva is achieved by the opposite division orientations of the vulval precursor cells (VPCs) flanking the axis of symmetry. Opposing Wnt signals control the division patterns of the VPCs by directing the localization of SYS-1/β-catenin in the direction of the Wnt gradient. Multiple Wnt signals expressed at the axis of symmetry promote the wild-type, anterior-facing P7.p orientation, whereas the Wnts EGL-20 and CWN-1, from the tail and posterior body wall muscle, respectively, cause the daughter cells of P7.p to face the posterior. EGL-20 acts through a member of the LDL receptor superfamily, LRP-2, along with Ror/CAM-1 and Van Gogh/VANG-1. All three transmembrane proteins control orientation through the localization of SYS-1.

The Fibroblast Growth Factor (FGF) pathway acts in concert with LIN-17/Frizzled to regulate the localization of SYS-1. The source of the FGF ligand is the 1° VPC, P6.p, which controls the polarity of the neighboring 2° VPC, P7.p, by signaling through the sex myoblasts (SMs) and activating the FGF pathway. The Wnt cwn-1 is expressed in the posterior body wall muscle of the worm as well as in the SMs, making it the only Wnt expressed on both the posterior and anterior sides of P7.p at the time of the polarity decision. Both sources of cwn-1 act instructively to influence P7.p polarity in the direction of the Wnt gradient. The FGF pathway regulates cwn-1 transcript levels in the SMs. These results provide the first evidence of interaction between FGF and Wnt signaling in C. elegans development and vulval cell lineage polarity, and they highlight the promiscuous nature of Wnt signaling within C. elegans.

Relevance: 10.00%

Abstract:

In the quest to develop viable designs for third-generation optical interferometric gravitational-wave detectors, one strategy is to monitor the relative momentum or speed of the test-mass mirrors, rather than monitoring their relative position. The most straightforward design for a speed-meter interferometer that accomplishes this is described and analyzed in Chapter 2. This design (due to Braginsky, Gorodetsky, Khalili, and Thorne) is analogous to a microwave-cavity speed meter conceived by Braginsky and Khalili. A mathematical mapping between the microwave speed meter and the optical interferometric speed meter is developed and used to show (in accord with the speed being a quantum nondemolition observable) that in principle the interferometric speed meter can beat the gravitational-wave standard quantum limit (SQL) by an arbitrarily large amount, over an arbitrarily wide range of frequencies. However, in practice, to reach or beat the SQL, this specific speed meter requires exorbitantly high input light power. The physical reason for this is explored, along with other issues such as constraints on performance due to optical dissipation.

Chapter 3 proposes a more sophisticated version of a speed meter. This new design requires only a modest input power and appears to be a fully practical candidate for third-generation LIGO. It can beat the SQL (the approximate sensitivity of second-generation LIGO interferometers) over a broad range of frequencies (~10 to 100 Hz in practice) by a factor h/h_SQL ~ sqrt(W_circ^SQL / W_circ). Here W_circ is the light power circulating in the interferometer arms and W_circ^SQL ≃ 800 kW is the circulating power required to reach the SQL at 100 Hz (the LIGO-II power). If squeezed vacuum (with a power-squeeze factor e^(-2R)) is injected into the interferometer's output port, the SQL can be beaten with a much reduced laser power: h/h_SQL ~ sqrt(W_circ^SQL e^(-2R) / W_circ). For realistic parameters (e^(-2R) ≃ 0.1 and W_circ ≃ 800 to 2000 kW), the SQL can be beaten by a factor of ~3 to 4 from 10 to 100 Hz. [However, as the power increases in these expressions, the speed meter becomes more narrow band; additional power and re-optimization of some parameters are required to maintain the wide band.] By performing frequency-dependent homodyne detection on the output (with the aid of two kilometer-scale filter cavities), one can markedly improve the interferometer's sensitivity at frequencies above 100 Hz.
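A quick numerical check of the quoted scaling under the stated parameters (this only evaluates the formula above; the bracketed bandwidth caveat is why the text quotes ~3 to 4 rather than the naive upper end):

```python
import numpy as np

# SQL-beating factor  h_SQL / h ~ sqrt(W_circ * e^{2R} / W_circ^SQL),
# i.e. the inverse of the expression in the text.
W_sql = 800e3        # circulating power needed to reach the SQL at 100 Hz, in watts
squeeze = 10.0       # e^{2R}, i.e. a power-squeeze factor e^{-2R} of about 0.1

for W_circ in (800e3, 2000e3):
    beat = np.sqrt(W_circ * squeeze / W_sql)
    print(f"W_circ = {W_circ/1e3:.0f} kW  ->  SQL beaten by ~{beat:.1f}")
# Gives ~3.2 and ~5.0; holding the wide band limits this to the quoted ~3 to 4.
```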

Chapters 2 and 3 are part of an ongoing effort to develop a practical variant of an interferometric speed meter and to combine the speed meter concept with other ideas to yield a promising third-generation interferometric gravitational-wave detector that entails low laser power.

Chapter 4 is a contribution to the foundations for analyzing sources of gravitational waves for LIGO. Specifically, it presents an analysis of the tidal work done on a self-gravitating body (e.g., a neutron star or black hole) in an external tidal field (e.g., that of a binary companion). The change in the mass-energy of the body as a result of the tidal work, or "tidal heating," is analyzed using the Landau-Lifshitz pseudotensor and the local asymptotic rest frame of the body. It is shown that the work done on the body is gauge invariant, while the body-tidal-field interaction energy contained within the body's local asymptotic rest frame is gauge dependent. This is analogous to Newtonian theory, where the interaction energy is shown to depend on how one localizes gravitational energy, but the work done on the body is independent of that localization. These conclusions play a role in analyses, by others, of the dynamics and stability of the inspiraling neutron-star binaries whose gravitational waves are likely to be seen and studied by LIGO.

Relevance: 10.00%

Abstract:

In this work, computationally efficient approximate methods are developed for analyzing uncertain dynamical systems. Uncertainties in both the excitation and the modeling are considered and examples are presented illustrating the accuracy of the proposed approximations.

For nonlinear systems under uncertain excitation, methods are developed to approximate the stationary probability density function and statistical quantities of interest. The methods are based on approximating solutions to the Fokker-Planck equation for the system and differ from traditional methods in which approximate solutions to stochastic differential equations are found. The new methods require little computational effort and examples are presented for which the accuracy of the proposed approximations compare favorably to results obtained by existing methods. The most significant improvements are made in approximating quantities related to the extreme values of the response, such as expected outcrossing rates, which are crucial for evaluating the reliability of the system.
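As a concrete instance of the quantities being approximated, consider a Duffing oscillator driven by Gaussian white noise, for which the stationary Fokker-Planck solution happens to be known in closed form; Rice's formula then gives the expected outcrossing rate. (The system and parameter values below are an illustrative special case, not one of the examples treated in the thesis.)

```python
import numpy as np
from scipy.integrate import quad

# Duffing oscillator under white noise:  x'' + c*x' + U'(x) = sqrt(2*D) * w(t),
# with U(x) = 0.5*k*x**2 + 0.25*a*x**4 and unit-intensity white noise w(t).
# The exact stationary Fokker-Planck density is
#   p(x, v)  proportional to  exp(-(c/D) * (0.5*v**2 + U(x))),
# and Rice's formula gives the mean up-crossing rate of a level b:
#   nu_plus(b) = integral_0^inf  v * p(b, v)  dv.

c, D, k, a = 0.2, 0.1, 1.0, 0.5          # illustrative parameters
beta = c / D
U = lambda x: 0.5 * k * x**2 + 0.25 * a * x**4

# Normalization of p(x, v) = exp(-beta*(v**2/2 + U(x))) / Z.
Zx = quad(lambda x: np.exp(-beta * U(x)), -np.inf, np.inf)[0]
Z = Zx * np.sqrt(2.0 * np.pi / beta)

def up_crossing_rate(b):
    # The velocity integral  int_0^inf v*exp(-beta*v**2/2) dv = 1/beta  is done analytically.
    return np.exp(-beta * U(b)) / (beta * Z)

for b in (1.0, 1.5, 2.0):
    print(f"level {b}: expected up-crossing rate = {up_crossing_rate(b):.3e} per unit time")
```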

Laplace's method of asymptotic approximation is applied to approximate the probability integrals which arise when analyzing systems with modeling uncertainty. The asymptotic approximation reduces the problem of evaluating a multidimensional integral to solving a minimization problem, and the results become asymptotically exact as the uncertainty in the modeling goes to zero. The method is found to provide good approximations for the moments and outcrossing rates for systems with uncertain parameters under stochastic excitation, even when there is a large amount of uncertainty in the parameters. The method is also applied to classical reliability integrals, providing approximations in both the transformed (independent, normally distributed) variables and the original variables. In the transformed variables, the asymptotic approximation yields a very simple formula for approximating the value of SORM integrals. In many cases, it may be computationally expensive to transform the variables, and an approximation is also developed in the original variables. Examples are presented illustrating the accuracy of the approximations, and the results are compared with existing approximations.
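A minimal sketch of Laplace's asymptotic approximation on a hypothetical two-dimensional test integrand (the reliability and SORM integrals treated in the thesis are not reproduced here):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.integrate import dblquad

# Laplace's approximation for  I = int exp(-g(x)) dx:
#   I  ~  exp(-g(x*)) * (2*pi)**(n/2) / sqrt(det H(x*)),
# where x* minimizes g and H is the Hessian of g at x*.

def g(x):
    x = np.atleast_1d(x)
    return 0.5 * np.sum(x**2) + 0.1 * np.sum(x**4)   # hypothetical test integrand

def hessian_fd(f, x, h=1e-4):
    """Central finite-difference Hessian of a scalar function f at x."""
    n = len(x)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            ei, ej = np.zeros(n), np.zeros(n)
            ei[i], ej[j] = h, h
            H[i, j] = (f(x + ei + ej) - f(x + ei - ej)
                       - f(x - ei + ej) + f(x - ei - ej)) / (4.0 * h * h)
    return H

def laplace_approx(f, x0):
    res = minimize(f, x0)                    # the integral reduces to a minimization
    H = hessian_fd(f, res.x)
    n = len(res.x)
    return np.exp(-res.fun) * (2.0 * np.pi) ** (n / 2.0) / np.sqrt(np.linalg.det(H))

# Brute-force quadrature for comparison; Laplace overestimates here because the quartic
# term is not small, and becomes exact as the non-Gaussian part of g shrinks.
exact = dblquad(lambda y, x: np.exp(-g(np.array([x, y]))), -8, 8, -8, 8)[0]
print("Laplace:", laplace_approx(g, np.array([0.2, -0.3])), " quadrature:", exact)
```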

Relevance: 10.00%

Abstract:

Chapter I

Theories for organic donor-acceptor (DA) complexes in solution and in the solid state are reviewed and compared with the available experimental data. As shown by McConnell et al. (Proc. Natl. Acad. Sci. U.S., 53, 46-50 (1965)), DA crystals fall into two classes: the holoionic class, with a fully or almost fully ionic ground state, and the nonionic class, with little or no ionic character. If the total lattice binding energy 2ε1 (per DA pair) gained in ionizing a DA lattice exceeds the cost 2ε0 of ionizing each DA pair, i.e., if ε1 + ε0 < 0, then the lattice is holoionic. The charge-transfer (CT) band in crystals and in solution can be explained, following Mulliken, by a second-order mixing of states, or by any theory that makes the CT transition strongly allowed yet due to only a small change in the ground state of the non-interacting components D and A (or D+ and A-). The magnetic properties of the DA crystals are discussed.

Chapter II

A computer program, EWALD, was written to calculate by the Ewald fast-convergence method the crystal Coulomb binding energy EC due to classical monopole-monopole interactions for crystals of any symmetry. The precision of EC values obtained is high: the uncertainties, estimated by the effect on EC of changing the Ewald convergence parameter η, ranged from ± 0.00002 eV to ± 0.01 eV in the worst case. The charge distribution for organic ions was idealized as fractional point charges localized at the crystallographic atomic positions: these charges were chosen from available theoretical and experimental estimates. The uncertainty in EC due to different charge distribution models is typically ± 0.1 eV (± 3%): thus, even the simple Hückel model can give decent results.
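A minimal sketch of the kind of lattice sum EWALD evaluates, for a charge-neutral cubic cell of point charges (Gaussian units; the actual program handles arbitrary symmetry, and the same trick of varying the convergence parameter can be used here to estimate the truncation uncertainty):

```python
import numpy as np
from scipy.special import erfc

def ewald_energy(positions, charges, box, eta=0.3, rcut=10.0, kcut=8):
    """Ewald Coulomb energy of a charge-neutral cubic cell (edge length `box`),
    in units of e^2/length; multiply by 14.3996 for eV with Angstrom coordinates."""
    r = np.asarray(positions, float)
    q = np.asarray(charges, float)
    N, V = len(q), box**3

    # Real-space part: erfc-screened pair sum over periodic images within rcut.
    nmax = int(np.ceil(rcut / box))
    shifts = [box * np.array([i, j, k])
              for i in range(-nmax, nmax + 1)
              for j in range(-nmax, nmax + 1)
              for k in range(-nmax, nmax + 1)]
    e_real = 0.0
    for i in range(N):
        for j in range(N):
            for n in shifts:
                if i == j and not n.any():
                    continue                       # no self-interaction in the home cell
                d = np.linalg.norm(r[i] - r[j] + n)
                if d < rcut:
                    e_real += 0.5 * q[i] * q[j] * erfc(eta * d) / d

    # Reciprocal-space part: (2*pi/V) * sum_k exp(-k^2/(4*eta^2)) |S(k)|^2 / k^2.
    e_recip = 0.0
    for nx in range(-kcut, kcut + 1):
        for ny in range(-kcut, kcut + 1):
            for nz in range(-kcut, kcut + 1):
                if nx == ny == nz == 0:
                    continue
                kvec = 2.0 * np.pi * np.array([nx, ny, nz]) / box
                k2 = kvec @ kvec
                S = np.sum(q * np.exp(1j * r @ kvec))
                e_recip += (2.0 * np.pi / V) * np.exp(-k2 / (4.0 * eta**2)) / k2 * abs(S)**2

    e_self = -eta / np.sqrt(np.pi) * np.sum(q**2)  # self-energy correction
    return e_real + e_recip + e_self

# Sanity check idea: a rock-salt arrangement of unit charges should reproduce the Madelung
# constant (about 1.748 per ion pair, in units of e^2 / nearest-neighbor distance).
```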

EC for Wurster's Blue Perchlorate is -4.1 eV/molecule: the crystal is stable under the binding provided by direct Coulomb interactions. EC for N-Methylphenazinium Tetracyanoquinodimethanide is 0.1 eV: exchange Coulomb interactions, which cannot be estimated classically, must provide the necessary binding.

EWALD was also used to test the McConnell classification of DA crystals. For the holoionic (1:1)-(N,N,N',N'-Tetramethyl-para-phenylenediamine:7,7,8,8-Tetracyanoquinodimethane), EC = -4.0 eV while 2ε0 = 4.65 eV: clearly, exchange forces must provide the balance. For the holoionic (1:1)-(N,N,N',N'-Tetramethyl-para-phenylenediamine:para-Chloranil), EC = -4.4 eV while 2ε0 = 5.0 eV: again EC falls short of 2ε0. As a Gedankenexperiment, two nonionic crystals were assumed to be ionized: for (1:1)-(Hexamethylbenzene:para-Chloranil), EC = -4.5 eV, 2ε0 = 6.6 eV; for (1:1)-(Naphthalene:Tetracyanoethylene), EC = -4.3 eV, 2ε0 = 6.5 eV. Thus, exchange energies in these nonionic crystals must not exceed 1 eV.

Chapter III

A rapid-convergence quantum-mechanical formalism is derived to calculate the electronic energy of an arbitrary molecular (or molecular-ion) crystal: this provides estimates of crystal binding energies which include the exchange Coulomb interactions. Previously obtained LCAO-MO wavefunctions for the isolated molecule(s) ("unit cell spin-orbitals") provide the starting point. Bloch's theorem is used to construct "crystal spin-orbitals". Overlap between the unit cell orbitals localized in different unit cells is neglected, or is eliminated by Löwdin orthogonalization. Simple formulas are then obtained for the total kinetic energy Q_λ^XT, the nuclear attraction [λ/λ]^XT, direct Coulomb [λλ/λ'λ']^XT, and exchange Coulomb [λλ'/λ'λ]^XT integrals, and direct-space brute-force expansions in atomic wavefunctions are given. Fourier series are obtained for [λ/λ]^XT, [λλ/λ'λ']^XT, and [λλ'/λ'λ]^XT with the help of the convolution theorem; the Fourier coefficients require the evaluation of Silverstone's two-center Fourier transform integrals. If the short-range interactions are calculated by brute-force integrations in direct space and the long-range effects are summed in Fourier space, then rapid convergence is possible for [λ/λ]^XT, [λλ/λ'λ']^XT, and [λλ'/λ'λ]^XT. This is achieved, as in the Ewald method, by modifying each atomic wavefunction by a "Gaussian convergence acceleration factor" and evaluating separately in direct and in Fourier space appropriate portions of [λ/λ]^XT, etc., where some of the portions contain the Gaussian factor.

Relevance: 10.00%

Abstract:

The 0.2% experimental accuracy of the 1968 Beers and Hughes measurement of the annihilation lifetime of ortho-positronium motivates the attempt to compute the first-order quantum-electrodynamic corrections to this lifetime. The theoretical problems arising in this computation are studied here in detail, up to the point of preparing the necessary computer programs and using them to carry out some of the less demanding steps -- but the computation has not yet been completed. Analytic evaluation of the contributing Feynman diagrams is superior to numerical evaluation, and it can be carried out with the aid of the Reduce algebra-manipulation computer program.
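For scale, the lowest-order (Ore-Powell) rate that these corrections modify is easy to evaluate; the constants below are standard, and the percent-level discrepancy it leaves is what the first-order correction must account for:

```python
import numpy as np

# Lowest-order (Ore-Powell) ortho-positronium 3-photon annihilation rate:
#   Gamma_0 = 2 * (pi**2 - 9) / (9 * pi) * alpha**6 * m_e c^2 / hbar
alpha = 1.0 / 137.035999     # fine-structure constant
mec2 = 0.51099895e6          # electron rest energy, eV
hbar = 6.582119569e-16       # eV * s

gamma0 = 2.0 * (np.pi**2 - 9.0) / (9.0 * np.pi) * alpha**6 * mec2 / hbar
print(f"Gamma_0 = {gamma0 / 1e6:.3f} per microsecond  (lifetime {1e9 / gamma0:.1f} ns)")
# -> about 7.21 per microsecond (~139 ns); the complete first-order correction and modern
#    measurements put the rate near 7.04 per microsecond, a shift of roughly 2%.
```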

The relation of the positronium decay rate to the electron-positron annihilation-in-flight amplitude is derived in detail, and it is shown that at threshold annihilation-in-flight, Coulomb divergences appear while infrared divergences vanish. The threshold Coulomb divergences in the amplitude cancel against like divergences in the modulating continuum wave function.

Using the lowest order diagrams of electron-positron annihilation into three photons as a test case, various pitfalls of computer algebraic manipulation are discussed along with ways of avoiding them. The computer manipulation of artificial polynomial expressions is preferable to the direct treatment of rational expressions, even though redundant variables may have to be introduced.

Special properties of the contributing Feynman diagrams are discussed, including the need to restore gauge invariance to the sum of the virtual photon-photon scattering box diagrams by means of a finite subtraction.

A systematic approach to the Feynman-Brown method of decomposition of single-loop diagram integrals with spin-related tensor numerators is developed in detail. This approach allows the Feynman-Brown method to be straightforwardly programmed in the Reduce algebra-manipulation language.

The fundamental integrals needed in the wake of the application of the Feynman-Brown decomposition are exhibited, and the methods which were used to evaluate them -- primarily dispersion techniques -- are briefly discussed.

Finally, it is pointed out that while the techniques discussed have permitted the computation of a fair number of the simpler integrals and diagrams contributing to the first order correction of the ortho-positronium annihilation rate, further progress with the more complicated diagrams and with the evaluation of traces is heavily contingent on obtaining access to adequate computer time and core capacity.

Relevance: 10.00%

Abstract:

This dissertation studies the long-term behavior of random Riccati recursions and of a mathematical epidemic model. Riccati recursions arise in Kalman filtering: the error covariance matrix of the Kalman filter satisfies a Riccati recursion. Convergence conditions for time-invariant Riccati recursions are well studied. We focus on the time-varying case and assume that the regressor matrix is random and independent and identically distributed according to a given distribution whose probability density function is continuous, supported on the whole space, and decays faster than any polynomial. We study the geometric convergence of the resulting probability distribution. We also study the global dynamics of epidemic spread over complex networks for various models. For instance, in the discrete-time Markov chain model, each node is either healthy or infected at any given time, so the number of states increases exponentially with the size of the network. The Markov chain has a unique stationary distribution in which all nodes are healthy with probability 1. Since the probability distribution of a Markov chain on a finite state space converges to its stationary distribution, this model predicts that the epidemic dies out after a long enough time. To analyze the Markov chain model, we study a nonlinear epidemic model whose state at any given time is the vector of marginal infection probabilities of the nodes in the network. Convergence to the origin in this epidemic map implies the extinction of the epidemic. The nonlinear model is upper-bounded by its linearization at the origin. As a result, the origin is the globally stable unique fixed point of the nonlinear model if the linear upper bound is stable. When the linear upper bound is unstable, the nonlinear model has a second fixed point; we analyze the stability of this second fixed point for both discrete-time and continuous-time models. Returning to the Markov chain model, we claim that the stability of the linear upper bound for the nonlinear model is strongly related to the extinction time of the Markov chain. We show that a stable linear upper bound is a sufficient condition for fast extinction and that the probability of survival is bounded by the nonlinear epidemic map.
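A sketch of one common discrete-time formulation of such a networked epidemic model and its linear upper bound (the exact update rule and thresholds in the dissertation may differ in detail; the ring network below is purely illustrative):

```python
import numpy as np

# Discrete-time networked SIS map: p[i] is the marginal infection probability of node i.
# Node i is healthy next step iff it is currently healthy (or recovers, prob. delta) and
# no infected neighbor transmits to it (each transmits independently with prob. beta).

def sis_step(p, A, beta, delta):
    no_infection = np.prod(1.0 - beta * A * p[np.newaxis, :], axis=1)
    return 1.0 - (1.0 - (1.0 - delta) * p) * no_infection

def linear_upper_bound_stable(A, beta, delta):
    """Linearization at the origin: p(t+1) ~ ((1-delta)I + beta*A) p(t); stable iff
    the spectral radius of that matrix is below 1."""
    M = (1.0 - delta) * np.eye(len(A)) + beta * A
    return np.max(np.abs(np.linalg.eigvals(M))) < 1.0

# Illustrative example: a ring of 10 nodes.
n = 10
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[i, (i - 1) % n] = 1.0

beta, delta = 0.04, 0.2
p = np.full(n, 0.5)
for _ in range(500):
    p = sis_step(p, A, beta, delta)

print("linear upper bound stable:", linear_upper_bound_stable(A, beta, delta))
print("max infection probability after 500 steps:", p.max())   # decays toward 0 here
```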

Relevance: 10.00%

Abstract:

Melting-temperature calculations have important applications in the theoretical study of phase diagrams and in computational materials screening. In this thesis, we present two new methods, the improved Widom particle-insertion method and the small-cell coexistence method, which we developed in order to determine melting temperatures both accurately and quickly.

We propose a scheme that drastically improves the efficiency of Widom's particle-insertion method by efficiently sampling cavities while calculating the integrals that provide the chemical potential of a physical system. This idea enables us to calculate the chemical potential of a liquid directly from first principles, without the help of any reference system, which is necessary in the commonly used thermodynamic-integration method. As an example, we apply our scheme, combined with the density functional formalism, to the calculation of the chemical potential of liquid copper. The calculated chemical potential is further used to locate the melting temperature. The calculated results agree closely with experiment.
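For reference, a bare-bones version of the standard Widom insertion estimator that the improved scheme accelerates, written for a Lennard-Jones fluid in reduced units (the configuration snapshots are assumed to come from a separate NVT simulation; the cavity-biased sampling that constitutes the actual improvement is not shown):

```python
import numpy as np

def lj_insertion_energy(r_new, coords, box, rc=2.5):
    """Lennard-Jones energy (reduced units) of inserting a test particle at r_new."""
    d = coords - r_new
    d -= box * np.round(d / box)              # minimum-image convention
    r2 = np.sum(d * d, axis=1)
    r2 = r2[r2 < rc * rc]                     # spherical cutoff
    inv6 = (1.0 / r2) ** 3
    return np.sum(4.0 * (inv6**2 - inv6))

def widom_mu_excess(configs, box, T, n_insert=1000, seed=0):
    """Excess chemical potential  mu_ex = -k_B T * ln < exp(-dU / k_B T) >,
    averaged over equilibrium snapshots `configs` (each an (N, 3) array) and over
    uniformly placed trial insertions."""
    rng = np.random.default_rng(seed)
    boltz = []
    for coords in configs:
        for _ in range(n_insert):
            r_new = rng.uniform(0.0, box, size=3)
            dU = lj_insertion_energy(r_new, coords, box)
            boltz.append(np.exp(-dU / T))
    return -T * np.log(np.mean(boltz))
```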

We propose the small-cell coexistence method, based on the statistical analysis of small-size coexistence MD simulations. It eliminates the risk of a metastable superheated solid in the fast-heating method, while also significantly reducing the computational cost relative to the traditional large-scale coexistence method. Using empirical potentials, we validate the method and systematically study the finite-size effect on the calculated melting points. The method converges to the exact result in the limit of a large system size, and an accuracy within 100 K in melting temperature is usually achieved when the simulation contains more than 100 atoms. DFT examples of tantalum, high-pressure sodium, and the ionic material NaCl demonstrate the accuracy and flexibility of the method in practical applications. The method is a promising approach for large-scale automated materials screening in which the melting temperature is a design criterion.
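One plausible way to post-process such an ensemble of small coexistence runs (not necessarily the estimator used in the thesis) is to record, at each temperature, the fraction of runs that end fully liquid and read off the temperature at which that fraction crosses one half:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical post-processing: at each temperature we ran many short MD simulations of a
# half-solid / half-liquid cell and recorded whether each ended fully liquid.  The counts
# below are made-up placeholders, not results from the thesis.
T = np.array([1300., 1320., 1340., 1360., 1380., 1400.])   # K (synthetic)
n_runs = 40
n_liquid = np.array([2, 7, 16, 26, 34, 39])                # runs ending liquid (synthetic)
frac = n_liquid / n_runs

def logistic(T, Tm, w):
    return 1.0 / (1.0 + np.exp(-(T - Tm) / w))

(Tm, w), _ = curve_fit(logistic, T, frac, p0=(1350.0, 10.0))
print(f"estimated melting point: {Tm:.0f} K  (transition width {w:.1f} K)")
```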

We present in detail two examples involving refractory materials. First, we demonstrate how key material properties that provide guidance in the design of refractory materials can be accurately determined via ab initio thermodynamic calculations in conjunction with experimental techniques based on synchrotron X-ray diffraction and thermal analysis under laser-heated aerodynamic levitation. The properties considered include the melting point, heat of fusion, heat capacity, thermal expansion coefficients, thermal stability, and sublattice disordering, as illustrated for a motivating example, lanthanum zirconate (La2Zr2O7). The close agreement with experiment in this known but structurally complex compound provides a good indication that the computational methods described can be used within a screening framework to identify novel refractory materials. Second, we report an extensive investigation into the melting temperatures of the Hf-C and Hf-Ta-C systems using ab initio calculations. With melting points above 4000 K, hafnium carbide (HfC) and tantalum carbide (TaC) are among the most refractory binary compounds known to date. Their mixture, with the general formula TaxHf1-xCy, is known to have a melting point of 4215 K at the composition Ta4HfC5, which has long been considered the highest melting temperature of any solid. Very few measurements of the melting point of tantalum and hafnium carbides have been documented, because of the obvious experimental difficulties at extreme temperatures. Our investigation identifies three major chemical factors that contribute to the high melting temperatures. Based on these factors, we propose and explore a new class of materials which, according to our ab initio calculations, may possess even higher melting temperatures than Ta-Hf-C. This example also demonstrates the feasibility of materials screening and discovery via ab initio calculations for the optimization of "higher-level" properties whose determination requires extensive sampling of atomic configuration space.

Relevance: 10.00%

Abstract:

An understanding of the mechanics of nanoscale metals and semiconductors is necessary for the safe and prolonged operation of nanostructured devices, from transistors to nanowire-based solar cells to miniaturized electrodes. This is a fascinating but challenging pursuit because mechanical properties that are size-invariant in conventional materials, such as strength, ductility and fracture behavior, can depend critically on sample size when materials are reduced to sub-micron dimensions. In this thesis, the effects of nanoscale sample size, microstructure and structural geometry on mechanical strength, deformation and fracture are explored for several classes of solid materials. Nanocrystalline platinum nano-cylinders with diameters of 60 nm to 1 μm and 12 nm grains are fabricated and tested in compression. We find that nano-sized metals containing few grains weaken as the sample diameter is reduced relative to the grain size, due to a change from deformation governed by internal grains to deformation governed by surface grains. Fracture at the nanoscale is explored by performing in-situ SEM tension tests on nanocrystalline platinum and amorphous metallic-glass nano-cylinders containing purposely introduced structural flaws. It is found that failure location, mechanism and strength are determined by the stress concentration with the highest local stress, whether this is at the structural flaw or at a microstructural feature. Principles of nano-mechanics are used to design and test mechanically robust hierarchical nanostructures with structural and electrochemical applications. Two-photon lithography and electroplating are used to fabricate 3D solid Cu octet meso-lattices with micron-scale features that exhibit strength higher than that of bulk Cu. An in-situ SEM lithiation stage is developed and used to simultaneously examine morphological and electrochemical changes in Si-coated Cu meso-lattices that are of interest as high-energy-capacity electrodes for Li-ion batteries.

Relevance: 10.00%

Abstract:

Advances in nano-scale mechanical testing have brought about progress in the understanding of physical phenomena in materials and a measure of control in the fabrication of novel materials. In contrast to bulk materials, which display size-invariant mechanical properties, sub-micron metallic samples show a critical dependence on sample size. The strength of nano-scale single-crystalline metals is well described by a power law, σ ∝ D^(-n), where D is a critical sample size and n is an experimentally fit positive exponent. This relationship is attributed to source-driven plasticity and demonstrates a strengthening as the decreasing sample size begins to limit the size and number of dislocation sources. A full understanding of this size dependence is complicated by the presence of microstructural features, such as interfaces, that can compete with the dominant dislocation-based deformation mechanisms. In this thesis, the effects of microstructural features such as grain boundaries and anisotropic crystallinity on nano-scale metals are investigated through uniaxial compression testing. We find that nano-sized Cu covered by a hard coating displays a Bauschinger effect, and the emergence of this behavior can be explained through a simple dislocation-based analytic model. Al nano-pillars containing a single vertically oriented coincident-site-lattice grain boundary are found to deform similarly to single-crystalline nano-pillars, with slip traces passing through the grain boundary. With increasing tilt angle of the grain boundary from the pillar axis, we observe a transition from dislocation-dominated deformation to grain-boundary sliding. Crystallites are observed to shear along the grain boundary, and molecular dynamics simulations reveal a mechanism of atomic migration that accommodates boundary sliding. We conclude with an analysis of the effects of inherent crystal anisotropy and alloying on the mechanical behavior of the Mg alloy AZ31. Through comparison with pure Mg, we show that the size effect dominates the strength of samples below 10 μm and that differences in the size effect between hexagonal slip systems are due to the inherent crystal anisotropy, suggesting that the fundamental mechanism of the size effect in these slip systems is the same.
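The power law itself is straightforward to extract from compression data with a log-log fit; the diameters and strengths below are synthetic placeholders, not measurements from this thesis:

```python
import numpy as np

# Fit sigma = C * D**(-n) by linear least squares in log-log space.
rng = np.random.default_rng(1)
D = np.array([100., 200., 400., 800., 1600.])                          # pillar diameter, nm (synthetic)
sigma = 5000.0 * D**-0.6 * (1.0 + 0.05 * rng.standard_normal(D.size))  # synthetic strengths, MPa

slope, intercept = np.polyfit(np.log(D), np.log(sigma), 1)
print(f"fitted size-effect exponent n = {-slope:.2f}")                 # recovers ~0.6 here
```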

Relevance: 10.00%

Abstract:

In Part I, we construct a symmetric stress-energy-momentum pseudo-tensor for the gravitational fields of Brans-Dicke theory, and use this to establish rigorously conserved integral expressions for energy-momentum P^i and angular momentum J^ik. Application of the two-dimensional surface integrals to the exact static spherical vacuum solution of Brans leads to an identification of our conserved mass with the active gravitational mass. Application to the distant fields of an arbitrary stationary source reveals that P^i and J^ik have the same physical interpretation as in general relativity. For gravitational waves whose wavelength is small on the scale of the background radius of curvature, averaging over several wavelengths in the Brill-Hartle-Isaacson manner produces a stress-energy-momentum tensor for gravitational radiation which may be used to calculate the changes in P^i and J^ik of their source.

In Part II, we develop strong evidence in favor of a conjecture by Penrose--that, in the Brans-Dicke theory, relativistic gravitational collapse in three dimensions produces black holes identical to those of general relativity. After pointing out that any black hole solution of general relativity also satisfies Brans-Dicke theory, we establish the Schwarzschild and Kerr geometries as the only possible spherical and axially symmetric black hole exteriors, respectively. Also, we show that a Schwarzschild geometry is necessarily formed in the collapse of an uncharged sphere.

Appendices discuss relationships among relativistic gravity theories and an example of a theory in which black holes do not exist.

Relevance: 10.00%

Abstract:

The number, symmetry, and product-forming capabilities of the intermediates in the photoinitiated reductions of endo- and exo-5-bromonorbornene and 2-bromonortricyclene with tri-n-butyltin hydride at temperatures between -10° and 22° were investigated.

Three mechanisms were evaluated:

1. The 5-norbornenyl- and 2-nortricyclyl radicals isomerize reversibly, with the former producing norbornene and the latter nortricyclene by abstraction of hydrogen from tri-n-butyltin hydride.

2. The 5-norbornenyl- and 2-nortricyclyl radicals isomerize reversibly, but some norbornene can be formed from the 2-nortricyclyl radical or some nortricyclene can be formed from the 5-norbornenyl radical by abstraction of hydrogen.

3. There is intervention of a "bridged" radical which may be formed reversibly or irreversibly from the 5-norbornenyl- and 2-nortricyclyl radicals.

Within small error limits, the ratios of norbornene to nortricyclene as a function of the concentration of tri-n-butyltin hydride are consistent with the first mechanism.

In the reductions with tri-n-butyltin deuteride, primary deuterium isotope effects of 2.3 and 2.1 for the abstraction of deuterium by the 2-nortricyclyl- and 5-norbornenyl radicals, respectively, were found. The primary deuterium isotope effects were invariant with the concentration of tri-n-butyltin deuteride, although the ratios of norbornene to nortricyclene changed appreciably over this range. This is consistent with the first mechanism, and can accommodate the formation of either product from more than one intermediate only if the primary kinetic deuterium isotope effects are nearly equal for all reactions leading to the single product.

The reduction of endo-5-bromonorbornene-5,6,6-d3 with tri-n-butyltin hydride or tri-n-butyltin deuteride leads to both unrearranged and rearranged norbornenes. The ratios of unrearranged to rearranged norbornene require that the 5-norbornenyl-5,6,6-d3 radical isomerize to an intermediate with the symmetry expected of a nortricyclyl free radical. The results are consistent with mechanism 1, but imply a surprising normal secondary kinetic deuterium isotope effect of about 1.25 for the abstraction of hydrogen by the 5-norbornenyl-5,6,6-d3 radical.

Approximate calculations show that there does not appear to be any substantial difference in the stabilities of the 5-norbornenyl and 2-nortricyclyl radicals.

Although the results cannot exclude a small contribution by a mechanism other than mechanism 1, no such contribution is required to adequately explain the results.

Relevance: 10.00%

Abstract:

In this thesis an extensive study is made of the set P of all paranormal operators in B(H), the set of all bounded endomorphisms on the complex Hilbert space H. An operator T ∈ B(H) is paranormal if, for each z in the resolvent set of T, d(z, σ(T)) · ||(T - zI)^(-1)|| = 1, where d(z, σ(T)) is the distance from z to σ(T), the spectrum of T. P contains the set N of normal operators, and P contains the set of hyponormal operators. However, P is contained in L, the set of all T ∈ B(H) such that the convex hull of the spectrum of T is equal to the closure of the numerical range of T. Thus N ⊂ P ⊂ L.

If the uniform operator (norm) topology is placed on B(H), then the relative topological properties of N, P, and L can be discussed. In Section IV, it is shown that: 1) N, P, and L are arc-wise connected and closed, 2) N, P, and L are nowhere dense subsets of B(H) when dim H ≥ 2, 3) N = P when dim H < ∞, 4) N is a nowhere dense subset of P when dim H = ∞, 5) P is not a nowhere dense subset of L when dim H < ∞, and 6) it is not known whether P is a nowhere dense subset of L when dim H = ∞.

The spectral properties of paranormal operators are of current interest in the literature. Putnam [22, 23] has shown that certain points on the boundary of the spectrum of a paranormal operator are either normal eigenvalues or normal approximate eigenvalues. Stampfli [26] has shown that a hyponormal operator with countable spectrum is normal. However, in Theorem 3.3 it is shown that a paranormal operator T with countable spectrum can be written as the direct sum, N ⊕ A, of a normal operator N with σ(N) = σ(T) and an operator A with σ(A) a subset of the derived set of σ(T). It is then shown that A need not be normal. If we restrict the countable spectrum of T ∈ P to lie on a C2-smooth rectifiable Jordan curve Go, then T must be normal [see Theorem 3.5 and its Corollary]. If T is a scalar paranormal operator with countable spectrum, then in order to conclude that T is normal the condition σ(T) ⊆ Go can be relaxed [see Theorem 3.6]. In Theorem 3.7 it is then shown that the above result is not true when T is not assumed to be scalar. It was then conjectured that if T ∈ P with σ(T) ⊆ Go, then T is normal. The proof of Theorem 3.5 relies heavily on the assumption that T has countable spectrum and cannot be generalized. However, the corollary to Theorem 3.9 states that if T ∈ P with σ(T) ⊆ Go, then T has a non-trivial lattice of invariant subspaces. After the completion of most of the work on this thesis, Stampfli [30, 31] published a proof that a paranormal operator T with σ(T) ⊆ Go is normal. His proof uses some rather deep results concerning numerical ranges, whereas the proof of Theorem 3.5 uses relatively elementary methods.