14 results for Correlation matrix
in CaltechTHESIS
Abstract:
The dissertation is concerned with the mathematical study of various network problems. First, three real-world networks are considered: (i) the human brain network, (ii) communication networks, and (iii) electric power networks. Although these networks perform very different tasks, they share similar mathematical foundations. The high-level goal is to analyze and/or synthesize each of these systems from a “control and optimization” point of view. After studying these three real-world networks, two abstract network problems, motivated by power systems, are also explored. The first is “flow optimization over a flow network” and the second is “nonlinear optimization over a generalized weighted graph”. The results derived in this dissertation are summarized below.
Brain Networks: Neuroimaging data reveals the coordinated activity of spatially distinct brain regions, which may be represented mathematically as a network of nodes (brain regions) and links (interdependencies). To obtain the brain connectivity network, the graphs associated with the correlation matrix and the inverse covariance matrix—describing marginal and conditional dependencies between brain regions—have been proposed in the literature. A question arises as to whether either of these graphs provides useful information about brain connectivity. Due to the electrical properties of the brain, this problem is investigated in the context of electrical circuits. First, we consider an electric circuit model and show that the inverse covariance matrix of the node voltages reveals the topology of the circuit. Second, we study the problem of finding the topology of the circuit from measurements alone. In this case, assuming that the circuit is hidden inside a black box and only the nodal signals are available for measurement, the aim is to find the topology of the circuit when a limited number of samples are available. For this purpose, we deploy the graphical lasso technique to estimate a sparse inverse covariance matrix. It is shown that the graphical lasso may find most of the circuit topology if the exact covariance matrix is well-conditioned. However, it may fail to work well when this matrix is ill-conditioned. To deal with ill-conditioned matrices, we propose a small modification to the graphical lasso algorithm and demonstrate its performance. Finally, the technique developed in this work is applied to the resting-state fMRI data of a number of healthy subjects.
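The distinction between the correlation matrix (marginal dependencies) and the inverse covariance matrix (conditional dependencies) can be sketched on a toy Gaussian chain graph; the graph, sample size, and threshold below are illustrative, not from the thesis:

```python
import numpy as np

# Toy Gaussian chain graph on 4 nodes: the precision (inverse covariance)
# matrix is tridiagonal, encoding conditional dependence only between
# neighbors, while the covariance itself is dense.
Theta = np.array([[ 2.0, -0.8,  0.0,  0.0],
                  [-0.8,  2.0, -0.8,  0.0],
                  [ 0.0, -0.8,  2.0, -0.8],
                  [ 0.0,  0.0, -0.8,  2.0]])
Sigma = np.linalg.inv(Theta)

rng = np.random.default_rng(0)
X = rng.multivariate_normal(np.zeros(4), Sigma, size=50_000)

# Naive estimate of the precision matrix from samples, then threshold.
Theta_hat = np.linalg.inv(np.cov(X, rowvar=False))
support = np.abs(Theta_hat) > 0.3
print(support.astype(int))   # recovers the chain (tridiagonal) pattern
```

With few samples or an ill-conditioned covariance, this naive inversion breaks down, which is where a sparsity-promoting estimator such as the graphical lasso comes in.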
Communication Networks: Congestion control techniques aim to adjust the transmission rates of competing users in the Internet in such a way that the network resources are shared efficiently. Despite the progress in the analysis and synthesis of Internet congestion control, almost all existing fluid models of congestion control assume that every link in the path of a flow observes the original source rate. To address this issue, a more accurate model is derived in this work for the behavior of the network under an arbitrary congestion controller, which takes into account the effect of buffering (queueing) on data flows. Using this model, it is proved that the well-known Internet congestion control algorithms may no longer be stable for the common pricing schemes, unless a sufficient condition is satisfied. It is also shown that these algorithms are guaranteed to be stable if a new pricing mechanism is used.
Electrical Power Networks: Optimal power flow (OPF) has been one of the most studied problems for power systems since its introduction by Carpentier in 1962. This problem is concerned with finding an optimal operating point of a power network minimizing the total power generation cost subject to network and physical constraints. It is well known that OPF is computationally hard to solve due to the nonlinear interrelation among the optimization variables. The objective is to identify a large class of networks over which every OPF problem can be solved in polynomial time. To this end, a convex relaxation is proposed, which solves the OPF problem exactly for every radial network and every meshed network with a sufficient number of phase shifters, provided power over-delivery is allowed. The concept of “power over-delivery” is equivalent to relaxing the power balance equations to inequality constraints.
Flow Networks: In this part of the dissertation, the minimum-cost flow problem over an arbitrary flow network is considered. In this problem, each node is associated with some possibly unknown injection, each line has two unknown flows at its ends related to each other via a nonlinear function, and all injections and flows need to satisfy certain box constraints. This problem, named generalized network flow (GNF), is highly non-convex due to its nonlinear equality constraints. Under the assumption of monotonicity and convexity of the flow and cost functions, a convex relaxation is proposed, which always finds the optimal injections. A primary application of this work is in the OPF problem. The results of this work on GNF prove that the relaxation of the power balance equations (i.e., power over-delivery) is not needed in practice under a very mild angle assumption.
Generalized Weighted Graphs: Motivated by power optimizations, this part aims to find a global optimization technique for a nonlinear optimization defined over a generalized weighted graph. Every edge of this type of graph is associated with a weight set corresponding to the known parameters of the optimization (e.g., the coefficients). The motivation behind this problem is to investigate how the (hidden) structure of a given real/complex valued optimization makes the problem easy to solve, and indeed the generalized weighted graph is introduced to capture the structure of an optimization. Various sufficient conditions are derived, which relate the polynomial-time solvability of different classes of optimization problems to weak properties of the generalized weighted graph such as its topology and the sign definiteness of its weight sets. As an application, it is proved that a broad class of real and complex optimizations over power networks are polynomial-time solvable due to the passivity of transmission lines and transformers.
Abstract:
This thesis presents a simplified state-variable method to solve for the nonstationary response of linear MDOF systems subjected to a modulated stationary excitation in both time and frequency domains. The resulting covariance matrix and evolutionary spectral density matrix of the response may be expressed as a product of a constant system matrix and a time-dependent matrix; the latter can be explicitly evaluated for most envelopes currently prevailing in engineering. The stationary correlation matrix of the response may be found by taking the limit of the covariance response when a unit step envelope is used. The reliability analysis can then be performed based on the first two moments of the response obtained.
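As a minimal sketch of the limiting (unit-step envelope) case, the stationary covariance of a single-degree-of-freedom oscillator under stationary white noise can be obtained from the Lyapunov equation A P + P Aᵀ + Q = 0; the parameter values below are illustrative:

```python
import numpy as np

# SDOF oscillator xddot + 2*zeta*wn*xdot + wn^2*x = f(t), with f white noise
# of two-sided spectral density S0 (so the state-noise intensity is 2*pi*S0).
wn, zeta, S0 = 2.0, 0.05, 1.0          # illustrative values
A = np.array([[0.0, 1.0],
              [-wn**2, -2.0 * zeta * wn]])
Q = np.array([[0.0, 0.0],
              [0.0, 2.0 * np.pi * S0]])

# Stationary covariance P solves A P + P A^T + Q = 0; vectorize with
# Kronecker products: (I kron A + A kron I) vec(P) = -vec(Q).
n = A.shape[0]
K = np.kron(np.eye(n), A) + np.kron(A, np.eye(n))
P = np.linalg.solve(K, -Q.ravel()).reshape(n, n)

print(P[0, 0])   # matches the classical result pi*S0 / (2*zeta*wn^3)
```

The displacement variance πS0/(2ζωn³) appears as P[0,0] and the velocity variance πS0/(2ζωn) as P[1,1]; the cross term vanishes, as expected for a stationary response.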
The method presented facilitates obtaining explicit solutions for general linear MDOF systems and is flexible enough to be applied to different stochastic models of excitation such as the stationary models, modulated stationary models, filtered stationary models, and filtered modulated stationary models and their stochastic equivalents including the random pulse train model, filtered shot noise, and some ARMA models in earthquake engineering. This approach may also be readily incorporated into finite element codes for random vibration analysis of linear structures.
A set of explicit solutions for the response of simple linear structures subjected to modulated white-noise earthquake models with four different envelopes is presented as an illustration. In addition, the method has been applied to three selected topics of interest in earthquake engineering, namely, nonstationary analysis of primary-secondary systems with classical or nonclassical damping, soil-layer response and related structural reliability analysis, and the effect of vertical components on the seismic performance of structures. For all three cases, explicit solutions are obtained, dynamic characteristics of structures are investigated, and some suggestions are given for the aseismic design of structures.
Abstract:
Be it a physical object or a mathematical model, a nonlinear dynamical system can display complicated aperiodic behavior, or "chaos." In many cases, this chaos is associated with motion on a strange attractor in the system's phase space. And the dimension of the strange attractor indicates the effective number of degrees of freedom in the dynamical system.
In this thesis, we investigate numerical issues involved with estimating the dimension of a strange attractor from a finite time series of measurements on the dynamical system.
Of the various definitions of dimension, we argue that the correlation dimension is the most efficiently calculable and we remark further that it is the most commonly calculated. We are concerned with the practical problems that arise in attempting to compute the correlation dimension. We deal with geometrical effects (due to the inexact self-similarity of the attractor), dynamical effects (due to the nonindependence of points generated by the dynamical system that defines the attractor), and statistical effects (due to the finite number of points that sample the attractor). We propose a modification of the standard algorithm, which eliminates a specific effect due to autocorrelation, and a new implementation of the correlation algorithm, which is computationally efficient.
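The correlation sum underlying this type of algorithm (in the spirit of Grassberger and Procaccia) can be sketched as follows; a uniform sample on the unit interval stands in for an attractor whose dimension is known to be 1, and the radii are illustrative:

```python
import numpy as np

def correlation_sum(x, r):
    """C(r): fraction of distinct point pairs closer than r."""
    d = np.abs(x[:, None] - x[None, :])      # all pairwise distances
    n = len(x)
    return (np.count_nonzero(d < r) - n) / (n * (n - 1))

rng = np.random.default_rng(1)
x = rng.uniform(0.0, 1.0, 2000)              # "attractor" of dimension 1

# The correlation dimension is the scaling exponent of C(r) ~ r^D.
r1, r2 = 0.01, 0.1
D = np.log(correlation_sum(x, r2) / correlation_sum(x, r1)) / np.log(r2 / r1)
print(D)                                      # close to 1
```

For a genuine time series from a dynamical system, one would also exclude temporally close pairs (a Theiler window) to suppress the autocorrelation effect noted above; the independent sample here sidesteps that issue.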
Finally, we apply the algorithm to chaotic data from the Caltech tokamak and the Texas tokamak (TEXT); we conclude that plasma turbulence is not a low-dimensional phenomenon.
Abstract:
Signal processing techniques play important roles in the design of digital communication systems. These include information manipulation, transmitter signal processing, channel estimation, channel equalization, and receiver signal processing. By interacting with communication theory and system implementation technologies, signal processing specialists develop efficient schemes for various communication problems by wisely exploiting mathematical tools such as analysis, probability theory, matrix theory, and optimization theory. In recent years, researchers realized that multiple-input multiple-output (MIMO) channel models are applicable to a wide range of different physical communication channels. Using elegant matrix-vector notation, many MIMO transceiver (including the precoder and equalizer) design problems can be solved by matrix and optimization theory. Furthermore, researchers showed that majorization theory and matrix decompositions, such as the singular value decomposition (SVD), geometric mean decomposition (GMD), and generalized triangular decomposition (GTD), provide unified frameworks for solving many point-to-point MIMO transceiver design problems.
In this thesis, we consider the transceiver design problems for linear time invariant (LTI) flat MIMO channels, linear time-varying narrowband MIMO channels, flat MIMO broadcast channels, and doubly selective scalar channels. Additionally, the channel estimation problem is also considered. The main contributions of this dissertation are the development of new matrix decompositions, and the use of these decompositions together with majorization theory in practical transmit-receive scheme designs for transceiver optimization problems. Elegant solutions are obtained, novel transceiver structures are developed, ingenious algorithms are proposed, and performance analyses are derived.
The first part of the thesis focuses on transceiver design with LTI flat MIMO channels. We propose a novel matrix decomposition which decomposes a complex matrix as a product of several sets of semi-unitary matrices and upper triangular matrices in an iterative manner. The complexity of the new decomposition, generalized geometric mean decomposition (GGMD), is always less than or equal to that of the geometric mean decomposition (GMD). The optimal GGMD parameters which yield the minimal complexity are derived. Based on the channel state information (CSI) at both the transmitter (CSIT) and receiver (CSIR), GGMD is used to design a butterfly structured decision feedback equalizer (DFE) MIMO transceiver which achieves the minimum average mean square error (MSE) under the total transmit power constraint. A novel iterative receiving detection algorithm for the specific receiver is also proposed. For the application to cyclic prefix (CP) systems in which the SVD of the equivalent channel matrix can be easily computed, the proposed GGMD transceiver has a K/log_2(K) complexity advantage over the GMD transceiver, where K is the number of data symbols per data block and is a power of 2. The performance analysis shows that the GGMD DFE transceiver can convert a MIMO channel into a set of parallel subchannels with the same bias and signal-to-interference-plus-noise ratios (SINRs). Hence, the average bit error rate (BER) is automatically minimized without the need for bit allocation. Moreover, the proposed transceiver can achieve the channel capacity simply by applying independent scalar Gaussian codes of the same rate at the subchannels.
In the second part of the thesis, we focus on MIMO transceiver design for slowly time-varying MIMO channels with the zero-forcing or MMSE criterion. Even though the GGMD/GMD DFE transceivers work for slowly time-varying MIMO channels by exploiting the instantaneous CSI at both ends, their performance is by no means optimal since the temporal diversity of the time-varying channels is not exploited. Based on the GTD, we develop the space-time GTD (ST-GTD) for the decomposition of linear time-varying flat MIMO channels. Under the assumption that CSIT, CSIR, and channel prediction are available, by using the proposed ST-GTD, we develop space-time geometric mean decomposition (ST-GMD) DFE transceivers under the zero-forcing or MMSE criterion. Under perfect channel prediction, the new system minimizes both the average MSE at the detector in each space-time (ST) block (which consists of several coherence blocks), and the average per ST-block BER in the moderately high SNR region. Moreover, the ST-GMD DFE transceiver designed under an MMSE criterion maximizes the Gaussian mutual information over the equivalent channel seen by each ST-block. In general, the newly proposed transceivers perform better than the GGMD-based systems since the superimposed temporal precoder is able to exploit the temporal diversity of time-varying channels. For practical applications, a novel ST-GTD based system which does not require channel prediction but shares the same asymptotic BER performance with the ST-GMD DFE transceiver is also proposed.
The third part of the thesis considers two quality of service (QoS) transceiver design problems for flat MIMO broadcast channels. The first one is the power minimization problem (min-power) with a total bitrate constraint and per-stream BER constraints. The second problem is the rate maximization problem (max-rate) with a total transmit power constraint and per-stream BER constraints. Exploiting a particular class of joint triangularization (JT), we are able to jointly optimize the bit allocation and the broadcast DFE transceiver for the min-power and max-rate problems. The resulting optimal designs are called the minimum power JT broadcast DFE transceiver (MPJT) and maximum rate JT broadcast DFE transceiver (MRJT), respectively. In addition to the optimal designs, two suboptimal designs based on the QR decomposition are proposed. They are realizable for an arbitrary number of users.
Finally, we investigate the design of a discrete Fourier transform (DFT) modulated filterbank transceiver (DFT-FBT) with LTV scalar channels. For both cases with known LTV channels and unknown wide-sense stationary uncorrelated scattering (WSSUS) statistical channels, we show how to optimize the transmitting and receiving prototypes of a DFT-FBT such that the SINR at the receiver is maximized. Also, a novel pilot-aided subspace channel estimation algorithm is proposed for orthogonal frequency division multiplexing (OFDM) systems with quasi-stationary multipath Rayleigh fading channels. Using the concept of a difference co-array, the new technique can construct M^2 co-pilots from M physical pilot tones with alternating pilot placement. Subspace methods, such as MUSIC and ESPRIT, can be used to estimate the multipath delays, and theoretically the number of identifiable paths is up to O(M^2). With the delay information, an MMSE estimator for the frequency response is derived. It is shown through simulations that the proposed method outperforms the conventional subspace channel estimator when the number of multipaths is greater than or equal to the number of physical pilots minus one.
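The difference co-array idea can be sketched numerically; the two-level (nested-style) pilot placement below is illustrative, not necessarily the exact scheme of the thesis:

```python
import numpy as np

# Hypothetical nested pilot placement with M = 6 physical pilot tones:
# a dense block {1, 2, 3} and a sparse block {4, 8, 12}.
dense = np.arange(1, 4)
sparse = 4 * np.arange(1, 4)
pilots = np.concatenate([dense, sparse])

# Difference co-array: every pairwise difference acts as a virtual "co-pilot".
diffs = np.unique(pilots[:, None] - pilots[None, :])
print(len(pilots), len(diffs))   # 6 physical pilots -> 23 contiguous virtual lags
```

Here 6 physical pilots yield the contiguous lag set −11…11, i.e., O(M²) virtual pilots, which is what lets subspace methods identify more paths than there are physical pilots.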
Abstract:
The forces cells apply to their surroundings control biological processes such as growth, adhesion, development, and migration. In the past 20 years, a number of experimental techniques have been developed to measure such cell tractions. These approaches have primarily measured the tractions applied by cells to synthetic two-dimensional substrates, which do not mimic in vivo conditions for most cell types. Many cell types live in a fibrous three-dimensional (3D) matrix environment. While studying cell behavior in such 3D matrices will provide valuable insights for the mechanobiology and tissue engineering communities, no experimental approaches have yet measured cell tractions in a fibrous 3D matrix.
This thesis describes the development and application of an experimental technique for quantifying cellular forces in a natural 3D matrix. Cells and their surrounding matrix are imaged in three dimensions with high speed confocal microscopy. The cell-induced matrix displacements are computed from the 3D image volumes using digital volume correlation. The strain tensor in the 3D matrix is computed by differentiating the displacements, and the stress tensor is computed by applying a constitutive law. Finally, tractions applied by the cells to the matrix are computed directly from the stress tensor.
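The displacement → strain → stress → traction chain can be sketched in two dimensions; linear elasticity and the numbers below are illustrative stand-ins for the actual fibrous-matrix constitutive law and 3D volume data:

```python
import numpy as np

# Illustrative moduli (not from the thesis): Young's modulus and Poisson ratio.
E, nu = 1000.0, 0.45
lam = E * nu / ((1 + nu) * (1 - 2 * nu))     # Lame parameters
mu = E / (2 * (1 + nu))

# A simple affine displacement field sampled on a grid (spacing h), standing
# in for the displacements measured by digital volume correlation.
h = 0.01
x = np.arange(0.0, 1.0, h)
X, Y = np.meshgrid(x, x, indexing="ij")
ux, uy = 0.02 * X, -0.01 * Y

# Small-strain tensor from displacement gradients.
dux_dx, dux_dy = np.gradient(ux, h)
duy_dx, duy_dy = np.gradient(uy, h)
exx, eyy = dux_dx, duy_dy
exy = 0.5 * (dux_dy + duy_dx)

# Hooke's law (plane strain), then traction t = sigma @ n on a face with
# outward normal n = (1, 0).
tr = exx + eyy
sxx = lam * tr + 2 * mu * exx
sxy = 2 * mu * exy
t = np.array([sxx[50, 50], sxy[50, 50]])
print(t)
```

The same differentiation-then-constitutive-law recipe extends directly to three dimensions and to a nonlinear fibrous constitutive model.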
The 3D traction measurement approach is used to investigate how cells mechanically interact with the matrix in biologically relevant processes such as division and invasion. During division, a single mother cell undergoes a drastic morphological change to split into two daughter cells. In a 3D matrix, dividing cells apply tensile force to the matrix through thin, persistent extensions that in turn direct the orientation and location of the daughter cells. Cell invasion into a 3D matrix is the first step required for cell migration in three dimensions. During invasion, cells initially apply minimal tractions to the matrix as they extend thin protrusions into the matrix fiber network. The invading cells anchor themselves to the matrix using these protrusions, and subsequently pull on the matrix to propel themselves forward.
Lastly, this thesis describes a constitutive model for the 3D fibrous matrix that uses a finite element (FE) approach. The FE model simulates the fibrous microstructure of the matrix and matches the cell-induced matrix displacements observed experimentally using digital volume correlation. The model is applied to predict how cells mechanically sense one another in a 3D matrix. It is found that cell-induced matrix displacements localize along linear paths. These linear paths propagate over a long range through the fibrous matrix, and provide a mechanism for cell-cell signaling and mechanosensing. The FE model developed here has the potential to reveal the effects of matrix density, inhomogeneity, and anisotropy in signaling cell behavior through mechanotransduction.
Abstract:
A central objective in signal processing is to infer meaningful information from a set of measurements or data. While most signal models have an overdetermined structure (fewer unknowns than equations), traditionally very few statistical estimation problems have considered a data model which is underdetermined (more unknowns than equations). However, in recent times, a wealth of theoretical and computational methods has been developed, primarily to study underdetermined systems by imposing sparsity on the unknown variables. This is motivated by the observation that despite the huge volume of data that arises in sensor networks, genomics, imaging, particle physics, web search, etc., its information content is often much smaller than the number of raw measurements. This has given rise to the possibility of reducing the number of measurements by downsampling the data, which automatically gives rise to underdetermined systems.
In this thesis, we provide new directions for estimation in an underdetermined system, both for a class of parameter estimation problems and also for the problem of sparse recovery in compressive sensing. There are two main contributions of the thesis: design of new sampling and statistical estimation algorithms for array processing, and development of improved guarantees for sparse reconstruction by introducing a statistical framework to the recovery problem.
We consider underdetermined observation models in array processing where the number of unknown sources simultaneously received by the array can be considerably larger than the number of physical sensors. We study new sparse spatial sampling schemes (array geometries) as well as propose new recovery algorithms that can exploit priors on the unknown signals and unambiguously identify all the sources. The proposed sampling structure is generic enough to be extended to multiple dimensions as well as to exploit different kinds of priors in the model such as correlation, higher order moments, etc.
Recognizing the role of correlation priors and suitable sampling schemes for underdetermined estimation in array processing, we introduce a correlation aware framework for recovering sparse support in compressive sensing. We show that it is possible to strictly increase the size of the recoverable sparse support using this framework provided the measurement matrix is suitably designed. The proposed nested and coprime arrays are shown to be appropriate candidates in this regard. We also provide new guarantees for convex and greedy formulations of the support recovery problem and demonstrate that it is possible to strictly improve upon existing guarantees.
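As a sketch of the greedy formulation of support recovery, orthogonal matching pursuit (OMP) can be implemented in a few lines; the problem sizes, seed, and amplitudes below are illustrative:

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily estimate a k-sparse support."""
    residual, support = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))   # best-matching column
        support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef          # re-project, update residual
    return sorted(support)

rng = np.random.default_rng(7)
A = rng.standard_normal((40, 100))    # 40 measurements, 100 unknowns
A /= np.linalg.norm(A, axis=0)        # unit-norm columns
true_support = [5, 42, 77]
x = np.zeros(100)
x[true_support] = [1.0, -2.0, 1.5]
y = A @ x
print(omp(A, y, 3))                   # expected to recover the true support
```

The correlation-aware framework of this part goes further: by recovering support from correlation statistics of the measurements rather than from a single snapshot, the recoverable sparsity level can strictly exceed what a setup like the one above allows.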
This new paradigm of underdetermined estimation that explicitly establishes the fundamental interplay between sampling, statistical priors and the underlying sparsity, leads to exciting future research directions in a variety of application areas, and also gives rise to new questions that can lead to stand-alone theoretical results in their own right.
Abstract:
Isotope dilution thorium and uranium analyses of the Harleton chondrite show a larger scatter than previously observed in equilibrated ordinary chondrites (EOC). The linear correlation of Th/U with 1/U in Harleton (and all EOC data) is produced by variation in the chlorapatite-to-merrillite mixing ratio. Apatite variations control the U concentrations. Phosphorus variations are compensated by inverse variations in U to preserve the Th/U vs. 1/U correlation. Because the Th/U variations reflect phosphate sampling, a weighted Th/U average should converge to an improved solar system Th/U. We obtain Th/U = 3.53 (1σ mean = 0.10), significantly lower and more precise than previous estimates.
To test whether apatite also produces Th/U variation in CI and CM chondrites, we performed P analyses on the solutions from leaching experiments of Orgueil and Murchison meteorites.
A linear Th/U vs. 1/U correlation in CI can be explained by redistribution of hexavalent U by aqueous fluids into carbonates and sulfates.
Unlike CI and EOC, whole-rock Th/U variations in CMs are mostly due to Th variations. A Th/U vs. 1/U linear correlation suggested by previous data for CMs is not real. We distinguish four components responsible for the whole-rock Th/U variations: (1) P- and actinide-depleted matrix containing small amounts of U-rich carbonate/sulfate phases (similar to CIs); (2) CAIs and (3) chondrules, which are the major reservoirs for actinides; and (4) an easily leachable phase of high Th/U, likely carbonate produced by CAI alteration. Phosphates play a minor role as actinide and P carrier phases in CM chondrites.
Using our Th/U and minimum galactic ages from halo globular clusters, we calculate relative supernova production rates for 232Th/238U and 235U/238U for different models of r-process nucleosynthesis. For uniform galactic production, the beginning of r-process nucleosynthesis must be less than 13 Gyr ago. Exponentially decreasing production is also consistent with a 13 Gyr age, but very slow decay times are required (less than 35 Gyr), approaching uniform production. The 15 Gyr Galaxy requires either a fast initial production growth (infall time constant less than 0.5 Gyr) followed by a very slow decrease (decay time constant greater than 100 Gyr), or the fastest possible decrease (≈8 Gyr) preceded by slow infall (≈7.5 Gyr).
Abstract:
This dissertation studies the long-term behavior of random Riccati recursions and a mathematical epidemic model. Riccati recursions are derived from Kalman filtering: the error covariance matrix of the Kalman filter satisfies a Riccati recursion. Convergence conditions for time-invariant Riccati recursions are well studied. We focus on the time-varying case and assume that the regressor matrix is random and independent and identically distributed according to a given distribution whose probability density function is continuous, supported on the whole space, and decaying faster than any polynomial. We study the geometric convergence of the probability distribution. We also study the global dynamics of epidemic spread over complex networks for various models. For instance, in the discrete-time Markov chain model, each node is either healthy or infected at any given time. In this setting, the number of states increases exponentially with the size of the network. The Markov chain has a unique stationary distribution, in which all the nodes are healthy with probability 1. Since the probability distribution of a Markov chain defined on a finite state space converges to the stationary distribution, this Markov chain model implies that the epidemic dies out after a long enough time. To analyze the Markov chain model, we study a nonlinear epidemic model whose state at any given time is the vector of marginal infection probabilities of the nodes in the network at that time. Convergence to the origin in the epidemic map implies the extinction of the epidemic. The nonlinear model is upper-bounded by its linearization at the origin. As a result, the origin is the globally stable unique fixed point of the nonlinear model if the linear upper bound is stable. The nonlinear model has a second fixed point when the linear upper bound is unstable. We carry out a stability analysis of the second fixed point for both discrete-time and continuous-time models.
Returning to the Markov chain model, we claim that the stability of the linear upper bound on the nonlinear model is strongly related to the extinction time of the Markov chain. We show that a stable linear upper bound is a sufficient condition for fast extinction, and that the probability of survival is bounded by the nonlinear epidemic map.
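A minimal sketch of the linear upper bound and its stability condition, on a toy four-node ring (the topology and rates are illustrative):

```python
import numpy as np

# Discrete-time mean-field SIS-style model on a 4-node ring.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
beta, delta = 0.1, 0.5                # infection / recovery rates (illustrative)

# Linearization at the origin upper-bounds the map: p_{t+1} <= M p_t.
M = (1 - delta) * np.eye(4) + beta * A
rho = max(abs(np.linalg.eigvals(M)))
print(rho)                            # spectral radius < 1 => epidemic dies out

# Iterate the nonlinear map: p_i = marginal probability node i is infected.
p = np.full(4, 0.9)
for _ in range(200):
    q = np.prod(1 - beta * A * p, axis=1)    # prob. of escaping all neighbors
    p = (1 - p) * (1 - q) + (1 - delta) * p
print(p.max())                        # driven to (numerically) zero
```

Since 1 − q ≤ β(Ap) by the union bound, the nonlinear map is dominated elementwise by M, so ρ(M) < 1 forces geometric convergence to the healthy state.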
Abstract:
Most space applications require deployable structures due to the limiting size of current launch vehicles. Specifically, payloads in nanosatellites such as CubeSats require very high compaction ratios due to the very limited space available in this type of platform. Strain-energy-storing deployable structures can be suitable for these applications, but the curvature to which these structures can be folded is limited to the elastic range. Thanks to fiber microbuckling, high-strain composite materials can be folded into much higher curvatures without showing significant damage, which makes them suitable for very high compaction deployable structure applications. However, in applications that require carrying loads in compression, fiber microbuckling also dominates the strength of the material. A good understanding of the strength in compression of high-strain composites is then needed to determine how suitable they are for this type of application.
The goal of this thesis is to investigate, experimentally and numerically, the microbuckling in compression of high-strain composites. In particular, the behavior in compression of unidirectional carbon fiber reinforced silicone rods (CFRS) is studied. Experimental testing of the compression failure of CFRS rods showed a higher strength in compression than the strength estimated by analytical models, which is unusual in standard polymer composites. This effect, first discovered in the present research, was attributed to the random variation of carbon fiber angles with respect to the nominal direction. This is an important effect, as it implies that microbuckling strength might be increased by controlling the fiber angles. With a higher microbuckling strength, high-strain materials could carry loads in compression without reaching microbuckling and therefore be suitable for several space applications.
A finite element model was developed to predict the homogenized stiffness of the CFRS, and the homogenization results were used in another finite element model that simulated a homogenized rod under axial compression. A statistical representation of the fiber angles was implemented in the model. The presence of fiber angles increased the longitudinal shear stiffness of the material, resulting in a higher strength in compression. The simulations showed a large increase of the strength in compression for lower values of the standard deviation of the fiber angle, and a slight decrease of strength in compression for lower values of the mean fiber angle. The strength observed in the experiments was achieved with the minimum local angle standard deviation observed in the CFRS rods, whereas the shear stiffness measured in torsion tests was achieved with the overall fiber angle distribution observed in the CFRS rods.
High-strain composites exhibit good bending capabilities, but they tend to be soft out-of-plane. To achieve a higher out-of-plane stiffness, the concept of dual-matrix composites is introduced. Dual-matrix composites are foldable composites which are soft in the crease regions and stiff elsewhere. Previous attempts to fabricate continuous dual-matrix fiber composite shells had limited performance due to excessive resin flow and matrix mixing. An alternative method, presented in this thesis, uses UV-cure silicone and fiberglass to avoid these problems. Preliminary experiments on the effect of folding on the out-of-plane stiffness are presented. An application to a conical log-periodic antenna for CubeSats is proposed, using origami-inspired stowing schemes that allow a conical dual-matrix composite shell to reach very high compaction ratios.
Abstract:
The synthesis of the first member of a new class of Dewar benzenes has been achieved. The synthesis of 2,3-dimethylbicyclo[2.2.0]hexa-2,5-diene-1,4-dicarboxylic acid and its anhydride is described. Dibromomaleic anhydride and dichloroethylene were found to add efficiently in a photochemical [2+2] cycloaddition to produce 1,2-dibromo-3,4-dichlorocyclobutane-1,2-dicarboxylic acid. Removal of the bromines with a tin/copper couple yielded dichlorocyclobutenes, which added to 2-butyne under photochemical conditions to yield 5,6-dichloro-2,3-dimethylbicyclo[2.2.0]hex-2-ene dicarboxylic acids. One of the three possible isomers yielded a stable anhydride, which could be dechlorinated using triphenyltin radicals generated by the photolysis of hexaphenylditin.
Photolysis of argon-matrix-isolated 2,3-dimethylbicyclo[2.2.0]hexa-2,5-diene-1,4-dicarboxylic acid anhydride produced products whose strongest bands in the infrared were at 3350 and 600 cm^(-1). This suggested the formation of terminal acetylenes. The spectra of argon-matrix-isolated E- and Z-3,4-dimethylhexa-1,5-diyne-3-ene and cis- and trans-octa-2,6-diyne-4-ene were compared with the spectrum of the photolysis products. Possibly all four diethynylethylenes were present in the anhydride photolysis products. Gas chromatograph-mass spectral analysis of the volatiles from the anhydride photolysis again suggested, but did not confirm, the presence of the diethynylethylenes.
Abstract:
The free-neutron beta-decay correlation A0 between neutron polarization and electron emission direction provides the strongest constraint on the ratio λ = gA/gV of the axial-vector to vector coupling constants in weak decay. In conjunction with the CKM matrix element Vud and the neutron lifetime τn, λ provides a test of Standard Model assumptions for the weak interaction. Leading high-precision measurements of A0 and τn in the 1995-2005 period showed discrepancies with prior measurements and with Standard Model predictions for the relationship between λ, τn, and Vud. The UCNA experiment was developed to measure A0 from the decay of polarized ultracold neutrons (UCN), providing a complementary determination of λ with systematic uncertainties different from those of prior cold-neutron-beam experiments. This dissertation describes the analysis of the dataset collected by UCNA in 2010, with emphasis on detector response calibrations and systematics. The UCNA measurement is placed in the context of the most recent τn results and cold-neutron A0 experiments.
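The standard relation between the asymmetry and the coupling ratio, A0 = −2(λ² + λ)/(1 + 3λ²), can be inverted numerically. The sketch below solves for λ by bisection from an illustrative A0 value near the world average; the input value −0.1176 is an assumption for illustration, not the UCNA result reported in the thesis.

```python
# Invert the beta-decay asymmetry relation A0 = -2(lam^2 + lam)/(1 + 3 lam^2)
# for lam = gA/gV. The A0 input below is illustrative only.

def a0_of_lambda(lam):
    """Neutron beta-decay asymmetry as a function of lam = gA/gV."""
    return -2.0 * (lam**2 + lam) / (1.0 + 3.0 * lam**2)

def lambda_from_a0(a0, lo=-1.5, hi=-1.0, tol=1e-12):
    """Bisection on [lo, hi], where a0_of_lambda is monotonic."""
    f_lo = a0_of_lambda(lo) - a0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        f_mid = a0_of_lambda(mid) - a0
        if abs(hi - lo) < tol:
            break
        if (f_mid > 0) == (f_lo > 0):
            lo, f_lo = mid, f_mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

lam = lambda_from_a0(-0.1176)   # illustrative A0, not the thesis value
print(lam)                      # lam = gA/gV, close to -1.27
```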
Abstract:
Experimental studies were conducted with the goals of 1) determining the origin of Pt-group element (PGE) alloys and associated mineral assemblages in refractory inclusions from meteorites and 2) developing a new ultrasensitive method for the in situ chemical and isotopic analysis of PGE. A general review of the geochemistry and cosmochemistry of the PGE is given, and specific research contributions are presented within the context of this broad framework.
An important step toward understanding the cosmochemistry of the PGE is the determination of the origin of PGE-rich metallic phases (most commonly εRu-Fe) that are found in Ca,Al-rich refractory inclusions (CAI) in C3V meteorites. These metals occur along with γNi-Fe metals, Ni-Fe sulfides and Fe oxides in multiphase opaque assemblages. Laboratory experiments were used to show that the mineral assemblages and textures observed in opaque assemblages could be produced by sulfidation and oxidation of once-homogeneous Ni-Fe-PGE metals. Phase equilibria, partitioning and diffusion kinetics were studied in the Ni-Fe-Ru system in order to quantify the conditions of opaque assemblage formation. Phase boundaries and tie lines in the Ni-Fe-Ru system were determined at 1273, 1073 and 873 K using an experimental technique that allowed the investigation of a large portion of the Ni-Fe-Ru system with a single experiment at each temperature by establishing a concentration gradient within which local equilibrium between coexisting phases was maintained. A wide miscibility gap was found to be present at each temperature, separating a hexagonal close-packed εRu-Fe phase from a face-centered cubic γNi-Fe phase. Phase equilibria determined here for the Ni-Fe-Ru system, and phase equilibria from the literature for the Ni-Fe-S and Ni-Fe-O systems, were compared with analyses of minerals from opaque assemblages to estimate the temperature and chemical conditions of opaque assemblage formation. It was determined that opaque assemblages equilibrated at a temperature of ~770 K, a sulfur fugacity 10 times higher than that of an equilibrium solar gas, and an oxygen fugacity 10^6 times higher than that of an equilibrium solar gas.
Diffusion rates between γNi-Fe and εRu-Fe metal play a critical role in determining the time (with respect to CAI petrogenesis) and duration of the opaque assemblage equilibration process. The diffusion coefficient for Ru in Ni (D_Ru^Ni) was determined as an analog for the Ni-Fe-Ru system by the thin-film diffusion method in the temperature range 1073 to 1673 K and is given by the expression:
D_Ru^Ni (cm^2 s^-1) = 5.0(±0.7) × 10^-3 exp(−2.3(±0.1) × 10^12 erg mol^-1 / RT), where R is the gas constant and T is the temperature in K. Based on the rates of dissolution and exsolution of metallic phases in the Ni-Fe-Ru system, it is suggested that opaque assemblages equilibrated after the melting and crystallization of the host CAI during a metamorphic event of ≥ 10^3 years duration. It is inferred that opaque assemblages originated as immiscible metallic liquid droplets in the CAI silicate liquid. The bulk compositions of PGE in these precursor alloys reflect an early stage of condensation from the solar nebula, and the partitioning of V between the precursor alloys and CAI silicate liquid reflects the reducing nebular conditions under which CAI were melted. The individual mineral phases now observed in opaque assemblages do not preserve an independent history prior to CAI melting and crystallization, but instead provide important information on the post-accretionary history of C3V meteorites and allow the quantification of the temperature, sulfur fugacity and oxygen fugacity of cooling planetary environments. This contrasts with previous models that called upon the formation of opaque assemblages by aggregation of phases that formed independently under highly variable conditions in the solar nebula prior to the crystallization of CAI.
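As a numerical check, the Arrhenius expression for the diffusion coefficient can be evaluated directly across the experimental temperature range; the sketch below does so in CGS units, matching the expression.

```python
import math

# Arrhenius parameters for Ru diffusion in Ni (CGS units, from the text)
D0 = 5.0e-3        # pre-exponential factor, cm^2 s^-1
Q  = 2.3e12        # activation energy, erg mol^-1
R  = 8.314e7       # gas constant, erg mol^-1 K^-1

def d_ru_ni(T):
    """Diffusion coefficient of Ru in Ni at temperature T (K), in cm^2 s^-1."""
    return D0 * math.exp(-Q / (R * T))

# Span of the thin-film diffusion experiments
for T in (1073, 1273, 1673):
    print(T, d_ru_ni(T))
```

As expected for thermally activated diffusion, the coefficient grows steeply with temperature over the 1073-1673 K experimental range.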
Analytical studies were carried out on PGE-rich phases from meteorites and the products of synthetic experiments using traditional electron microprobe x-ray analytical techniques. The concentrations of PGE in common minerals from meteorites and terrestrial rocks are far below the ~100 ppm detection limit of the electron microprobe. This has limited the scope of analytical studies to the very few cases where PGE are unusually enriched. To study the distribution of PGE in common minerals will require an in situ analytical technique with much lower detection limits than any methods currently in use. To overcome this limitation, resonance ionization of sputtered atoms was investigated for use as an ultrasensitive in situ analytical technique for the analysis of PGE. The mass spectrometric analysis of Os and Re was investigated using a pulsed primary Ar+ ion beam to provide sputtered atoms for resonance ionization mass spectrometry. An ionization scheme for Os that utilizes three resonant energy levels (including an autoionizing energy level) was investigated and found to have superior sensitivity and selectivity compared to nonresonant and one- and two-energy-level resonant ionization schemes. An elemental selectivity for Os over Re of ≥ 10^3 was demonstrated. It was found that detuning the ionizing laser from the autoionizing energy level to an arbitrary region in the ionization continuum resulted in a five-fold decrease in signal intensity and a ten-fold decrease in elemental selectivity. Osmium concentrations in synthetic metals and iron meteorites were measured to demonstrate the analytical capabilities of the technique. A linear correlation between Os+ signal intensity and the known Os concentration was observed over a range of nearly 10^4 in Os concentration with an accuracy of ~ ±10%, a minimum detection limit of 7 parts per billion atomic, and a useful yield of 1%.
Resonance ionization of sputtered atoms samples the dominant neutral fraction of sputtered atoms and utilizes multiphoton resonance ionization to achieve high sensitivity and to eliminate atomic and molecular interferences. Matrix effects should be small compared to secondary ion mass spectrometry because ionization occurs in the gas phase and is largely independent of the physical properties of the matrix material. Resonance ionization of sputtered atoms can be applied to in situ chemical analysis of most high ionization potential elements (including all of the PGE) in a wide range of natural and synthetic materials. The high useful yield and elemental selectivity of this method should eventually allow the in situ measurement of Os isotope ratios in some natural samples and in sample extracts enriched in PGE by fire assay fusion.
Phase equilibria and diffusion experiments have provided the basis for a reinterpretation of the origin of opaque assemblages in CAI and have yielded quantitative information on conditions in the primitive solar nebula and cooling planetary environments. Development of the method of resonance ionization of sputtered atoms for the analysis of Os has shown that this technique has wide applications in geochemistry and will for the first time allow in situ studies of the distribution of PGE at the low concentration levels at which they occur in common minerals.
Abstract:
Kohn-Sham density functional theory (KSDFT) is currently the main workhorse of quantum mechanical calculations in physics, chemistry, and materials science. From a mechanical engineering perspective, we are interested in studying the role of defects in the mechanical properties of materials. In real materials, defects are typically found at very small concentrations: e.g., vacancies occur at parts per million, dislocation densities in metals range from $10^{10} m^{-2}$ to $10^{15} m^{-2}$, and grain sizes vary from nanometers to micrometers in polycrystalline materials. In order to model materials at realistic defect concentrations using DFT, we would need to work with system sizes beyond millions of atoms. Due to the cubic-scaling computational cost with respect to the number of atoms in conventional DFT implementations, such system sizes are unreachable. Since the early 1990s, there has been great interest in developing DFT implementations with linear-scaling computational cost. A promising approach to achieving linear-scaling cost is to approximate the density matrix in KSDFT. The focus of this thesis is to provide a firm mathematical framework for studying the convergence of these approximations. We reformulate Kohn-Sham density functional theory as a nested variational problem in the density matrix, the electrostatic potential, and a field dual to the electron density. The corresponding functional is linear in the density matrix and thus amenable to spectral representation. Based on this reformulation, we introduce a new approximation scheme, called spectral binning, which does not require smoothing of the occupancy function and thus applies at arbitrarily low temperatures. We prove convergence of the approximate solutions with respect to spectral binning and with respect to an additional spatial discretization of the domain.
For a standard one-dimensional benchmark problem, we present numerical experiments for which spectral binning exhibits excellent convergence characteristics and outperforms other linear-scaling methods.
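The central object in such linear-scaling schemes is the density matrix, which at zero temperature is the spectral projector onto the occupied subspace of the Hamiltonian. The sketch below uses a toy 1D tight-binding Hamiltonian (an assumption for illustration, not the self-consistent Kohn-Sham operator of the thesis) to show the two properties any density-matrix approximation must reproduce: the trace constraint and idempotency.

```python
import numpy as np

# Toy 1D tight-binding Hamiltonian with nearest-neighbor hopping.
# This stands in for the Kohn-Sham Hamiltonian, which in the thesis
# arises from the self-consistent KSDFT problem.
n, n_electrons = 40, 10
H = np.zeros((n, n))
for i in range(n - 1):
    H[i, i + 1] = H[i + 1, i] = -1.0

# Zero-temperature density matrix: projector onto the lowest
# n_electrons eigenstates (step-function occupancy, no smoothing).
eigvals, eigvecs = np.linalg.eigh(H)
occ = eigvecs[:, :n_electrons]
P = occ @ occ.T

print(np.trace(P))                 # trace equals the electron count
print(np.linalg.norm(P @ P - P))   # idempotency: P^2 = P
```

The cost of this dense eigen-decomposition scales cubically with n, which is precisely what density-matrix approximation schemes aim to avoid.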
Abstract:
An approximate approach is presented for determining the stationary random response of a general multidegree-of-freedom nonlinear system under stationary Gaussian excitation. This approach relies on defining an equivalent linear system for the nonlinear system. Two particular systems which possess exact solutions have been solved by this approach, and it is concluded that this approach can generate reasonable solutions even for systems with fairly large nonlinearities. The approximate approach has also been applied to two examples for which no exact or approximate solutions were previously available.
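For a single-degree-of-freedom case, the equivalent-linearization idea can be sketched concretely. Below, a Duffing-type oscillator under Gaussian white-noise excitation is replaced by a linear system whose stiffness depends on the (assumed Gaussian) response variance, and the stationary variance is found by fixed-point iteration; the parameter values and the specific closure are illustrative, not taken from the thesis.

```python
import math

# Duffing oscillator: x'' + 2*zeta*w0*x' + w0**2*x + eps*x**3 = w(t),
# where w(t) is Gaussian white noise with two-sided spectral density S0.
# Gaussian-closure equivalent stiffness: w_eq^2 = w0^2 + 3*eps*sigma2,
# and the stationary variance of the equivalent linear system is
#   sigma2 = pi*S0 / (2*zeta*w0*w_eq^2).
zeta, w0, eps, S0 = 0.05, 1.0, 0.5, 0.1   # illustrative values

sigma2 = math.pi * S0 / (2 * zeta * w0 * w0**2)   # start from the linear result
for _ in range(100):
    w_eq2 = w0**2 + 3 * eps * sigma2              # update equivalent stiffness
    sigma2_new = math.pi * S0 / (2 * zeta * w0 * w_eq2)
    if abs(sigma2_new - sigma2) < 1e-12:
        break
    sigma2 = sigma2_new

print(sigma2)   # stationary variance of the equivalent linear system
```

The hardening cubic term raises the equivalent stiffness, so the converged variance is smaller than that of the underlying linear oscillator.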
Also presented is a matrix algebra approach for determining the stationary random response of a general multidegree-of-freedom linear system. Its derivation involves only matrix algebra and some properties of the instantaneous correlation matrices of a stationary process; it is therefore very direct and straightforward. The application of this matrix algebra approach is in general simpler than that of commonly used approaches.
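The thesis develops its own derivation via instantaneous correlation matrices; the textbook-equivalent matrix form, shown here as a sketch, is that the stationary covariance P of a linear system x' = Ax + Bw under white noise satisfies the Lyapunov equation A P + P Aᵀ + B q Bᵀ = 0, which is itself a pure matrix-algebra computation.

```python
import numpy as np

# Stationary covariance of x' = A x + B w, with E[w(t) w(s)] = q*delta(t-s):
# solve the Lyapunov equation A P + P A^T + B q B^T = 0 via Kronecker
# vectorization: (kron(I, A) + kron(A, I)) vec(P) = -vec(B q B^T).
def stationary_covariance(A, B, q):
    n = A.shape[0]
    I = np.eye(n)
    lhs = np.kron(I, A) + np.kron(A, I)
    rhs = -(B * q @ B.T).reshape(-1)
    return np.linalg.solve(lhs, rhs).reshape(n, n)

# Single-degree-of-freedom check: x'' + c x' + k x = w(t),
# whose displacement variance is analytically q / (2 c k).
c, k, q = 0.2, 2.0, 1.0
A = np.array([[0.0, 1.0], [-k, -c]])
B = np.array([[0.0], [1.0]])
P = stationary_covariance(A, B, q)
print(P[0, 0])   # displacement variance
print(P[1, 1])   # velocity variance, analytically q / (2 c)
```

The vectorized solve requires A to be stable (all eigenvalues in the left half-plane), which holds for any damped structural system.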