21 results for Number representation format

in CaltechTHESIS


Relevance: 30.00%

Publisher:

Abstract:

The objective of this thesis is to develop a framework to conduct velocity resolved - scalar modeled (VR-SM) simulations, which will enable accurate simulations at higher Reynolds and Schmidt (Sc) numbers than are currently feasible. The framework established will serve as a first step to enable future simulation studies for practical applications. To achieve this goal, in-depth analyses of the physical, numerical, and modeling aspects related to Sc >> 1 are presented, specifically when modeling in the viscous-convective subrange. Transport characteristics are scrutinized by examining scalar-velocity Fourier-mode interactions in Direct Numerical Simulation (DNS) datasets; these suggest that scalar modes in the viscous-convective subrange do not directly affect large-scale transport for high Sc. Further observations confirm that discretization errors inherent in numerical schemes can be sufficiently large to wipe out any meaningful contribution from subfilter models. This provides strong incentive to develop more effective numerical schemes to support high-Sc simulations. To lower numerical dissipation while maintaining physically and mathematically appropriate scalar bounds during the convection step, a novel method of enforcing bounds is formulated, specifically for use with cubic Hermite polynomials. Boundedness of the scalar being transported is effected by applying derivative-limiting techniques, and physically plausible single sub-cell extrema are allowed to exist to help minimize numerical dissipation. The proposed bounding algorithm yields significant performance gains in DNS of turbulent mixing layers and of homogeneous isotropic turbulence. Next, the combined physical/mathematical behavior of the subfilter scalar-flux vector is analyzed in homogeneous isotropic turbulence by examining vector orientation in the strain-rate eigenframe. The results indicate no discernible dependence on the modeled scalar field and lead to the identification of the tensor-diffusivity model as a good representation of the subfilter flux. Velocity resolved - scalar modeled simulations of homogeneous isotropic turbulence are conducted to confirm the behavior theorized in these a priori analyses, and suggest that the tensor-diffusivity model is ideal for use in the viscous-convective subrange. Simulations of a turbulent mixing layer are also discussed, with the partial objective of analyzing the Schmidt number dependence of a variety of scalar statistics. Large-scale statistics are confirmed to be relatively independent of the Schmidt number for Sc >> 1, which is explained by the dominance of subfilter dissipation over resolved molecular dissipation in the simulations. Overall, the VR-SM framework presented is quite effective in predicting large-scale transport characteristics of high Schmidt number scalars; however, prediction of subfilter quantities would entail additional modeling intended specifically for this purpose. The VR-SM simulations presented in this thesis provide us with the opportunity to overlap with experimental studies, while at the same time creating an assortment of baseline datasets for future validation of LES models, thereby satisfying the objectives outlined for this work.
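The derivative-limiting idea can be illustrated with a standard Fritsch-Carlson-style limiter, which keeps a cubic Hermite interpolant within the bounds of its nodal data. The Python sketch below is a generic illustration of that family of techniques, not the bounding algorithm developed in the thesis (which additionally admits physically plausible single sub-cell extrema):

    import numpy as np

    def limited_derivatives(x, f, alpha=3.0):
        """Nodal derivatives for a cubic Hermite interpolant, limited in the
        spirit of Fritsch-Carlson so the interpolant cannot overshoot the
        nodal data; alpha caps |d| relative to the local secant slope."""
        h = np.diff(x)
        delta = np.diff(f) / h                      # secant slopes
        d = np.zeros(len(f))
        interior = delta[:-1] * delta[1:] > 0       # zero slope at local extrema
        d[1:-1][interior] = 2.0 / (1.0 / delta[:-1][interior]
                                   + 1.0 / delta[1:][interior])
        d[0], d[-1] = delta[0], delta[-1]
        for i, s in enumerate(delta):               # clamp against each secant
            for j in (i, i + 1):
                if abs(d[j]) > alpha * abs(s):
                    d[j] = np.sign(d[j]) * alpha * abs(s)
        return d  # pair (f, d) with scipy.interpolate.CubicHermiteSpline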

Relevance: 30.00%

Publisher:

Abstract:

There is a growing interest in taking advantage of possible patterns and structures in data so as to extract the desired information and overcome the curse of dimensionality. In a wide range of applications, including computer vision, machine learning, medical imaging, and social networks, the signal that gives rise to the observations can be modeled to be approximately sparse and exploiting this fact can be very beneficial. This has led to an immense interest in the problem of efficiently reconstructing a sparse signal from limited linear observations. More recently, low-rank approximation techniques have become prominent tools to approach problems arising in machine learning, system identification and quantum tomography.

In sparse and low-rank estimation problems, the challenge is the inherent intractability of the objective function, and one needs efficient methods to capture the low-dimensionality of these models. Convex optimization is often a promising tool to attack such problems. An intractable problem with a combinatorial objective can often be "relaxed" to obtain a tractable but almost as powerful convex optimization problem. This dissertation studies convex optimization techniques that can take advantage of low-dimensional representations of the underlying high-dimensional data. We provide provable guarantees that ensure that the proposed algorithms will succeed under reasonable conditions, and answer questions of the following flavor:

  • For a given number of measurements, can we reliably estimate the true signal?
  • If so, how good is the reconstruction as a function of the model parameters?

More specifically: (i) focusing on linear inverse problems, we generalize the classical error bounds known for the least-squares technique to the lasso formulation, which incorporates the signal model. (ii) We show that intuitive convex approaches do not perform as well as expected when it comes to signals that have multiple low-dimensional structures simultaneously. (iii) Finally, we propose convex relaxations for the graph clustering problem and give sharp performance guarantees for a family of graphs arising from the so-called stochastic block model. We pay particular attention to the following aspects. For (i) and (ii), we aim to provide a general geometric framework, in which the results on sparse and low-rank estimation can be obtained as special cases. For (i) and (iii), we investigate the precise performance characterization, which yields the right constants in our bounds and the true dependence between the problem parameters.
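As a concrete instance of the lasso formulation mentioned in (i), here is a minimal proximal-gradient (ISTA) sketch in Python; the problem sizes, seed, and regularization weight are illustrative assumptions, not values from the thesis:

    import numpy as np

    def ista_lasso(A, y, lam, n_iter=500):
        """Minimize 0.5*||Ax - y||^2 + lam*||x||_1 by proximal gradient."""
        L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            z = x - A.T @ (A @ x - y) / L      # gradient step on the smooth part
            x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
        return x

    # A k-sparse signal recovered from m << n random linear measurements.
    rng = np.random.default_rng(0)
    n, m, k = 200, 80, 5
    x_true = np.zeros(n)
    x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
    A = rng.normal(size=(m, n)) / np.sqrt(m)
    x_hat = ista_lasso(A, A @ x_true, lam=0.05)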

Relevance: 30.00%

Publisher:

Abstract:

Let F(θ) be a separable extension of degree n of a field F. Let Δ and D be integral domains with quotient fields F(θ) and F respectively. Assume that Δ ⊇ D. A mapping φ of Δ into the n × n D matrices is called a Δ/D rep if (i) it is a ring isomorphism and (ii) it maps d onto dI_n whenever d ∈ D. If the matrices are also symmetric, φ is a Δ/D symrep.

Every Δ/D rep can be extended uniquely to an F(θ)/F rep. This extension is completely determined by the image of θ. Two Δ/D reps are called equivalent if the images of θ differ by a D unimodular similarity. There is a one-to-one correspondence between classes of Δ/D reps and classes of Δ ideals having an n-element basis over D.
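To make the definitions concrete, consider the minimal case n = 2 with θ² = d for some non-square d ∈ F (an illustration added here, not part of the original abstract). With respect to the basis {1, θ}, multiplication by a + bθ gives the F(θ)/F rep

    φ(a + bθ) = [ a  bd ]
                [ b   a ],        φ(d′) = d′I_2 for d′ ∈ F.

A symrep must instead send θ to a symmetric matrix S with S² = dI_2; writing S = [ p q ; q r ], the required characteristic polynomial x² − d forces r = −p and p² + q² = d, so in this quadratic case an F(θ)/F symrep exists precisely when d is a sum of two squares in F.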

The condition that a given Δ/D rep class contain a Δ/D symrep can be phrased in various ways. Using these formulations it is possible to (i) bound the number of symreps in a given class, (ii) count the number of symreps if F is finite, (iii) establish the existence of an F(θ)/F symrep when n is odd, F is an algebraic number field, and F(θ) is totally real if F is formally real (for n = 3 see Sapiro, “Characteristic polynomials of symmetric matrices” Sibirsk. Mat. Ž. 3 (1962) pp. 280-291), and (iv) study the case D = Z, the integers (see Taussky, “On matrix classes corresponding to an ideal and its inverse” Illinois J. Math. 1 (1957) pp. 108-113 and Faddeev, “On the characteristic equations of rational symmetric matrices” Dokl. Akad. Nauk SSSR 58 (1947) pp. 753-754).

The case D = Z and n = 2 is studied in detail. Let Δ′ be an integral domain also having quotient field F(θ) and such that Δ′ ⊇ Δ. Let φ be a Δ/Z symrep. A method is given for finding a Δ′/Z symrep ψ such that the Δ′ ideal class corresponding to the class of ψ is an extension to Δ′ of the Δ ideal class corresponding to the class of φ. The problem of finding all Δ/Z symreps equivalent to a given one is studied.

Relevance: 20.00%

Publisher:

Abstract:

Neurons in the songbird forebrain nucleus HVc are highly sensitive to auditory temporal context and have some of the most complex auditory tuning properties yet discovered. HVc is crucial for learning, perceiving, and producing song, thus it is important to understand the neural circuitry and mechanisms that give rise to these remarkable auditory response properties. This thesis investigates these issues experimentally and computationally.

Extracellular studies reported here compare the auditory context sensitivity of neurons in HVc with neurons in the afferent areas of field L. These demonstrate that there is a substantial increase in auditory temporal context sensitivity from the areas of field L to HVc. Whole-cell recordings of HVc neurons from acute brain slices show that excitatory synaptic transmission between HVc neurons involves the release of glutamate and the activation of both AMPA/kainate and NMDA-type glutamate receptors. Additionally, widespread inhibitory interactions exist between HVc neurons that are mediated by postsynaptic GABA_A receptors. Intracellular recordings of HVc auditory neurons in vivo provide evidence that HVc neurons encode information about temporal structure using a variety of cellular and synaptic mechanisms, including syllable-specific inhibition, excitatory post-synaptic potentials with a range of different time courses, burst-firing, and song-specific hyperpolarization.

The final part of this thesis presents two computational approaches for representing and learning temporal structure. The first method utilizes computational elements that are analogous to temporal combination-sensitive neurons in HVc. A network of these elements can learn using local information and lateral inhibition. The second method presents a more general framework which allows a network to discover mixtures of temporal features in a continuous stream of input.

Relevance: 20.00%

Publisher:

Abstract:

Storage systems are widely used and have played a crucial role in both consumer and industrial products, for example, personal computers, data centers, and embedded systems. However, such systems face issues of cost, restricted lifetime, and reliability with the emergence of new systems and devices, such as distributed storage and flash memory. Information theory, on the other hand, provides fundamental bounds and solutions to fully utilize resources such as data density, information I/O and network bandwidth. This thesis bridges these two topics and proposes to solve challenges in data storage using a variety of coding techniques, so that storage becomes faster, more affordable, and more reliable.

We consider the system level and study the integration of RAID schemes and distributed storage. Erasure-correcting codes are the basis of the ubiquitous RAID schemes for storage systems, where disks correspond to symbols in the code and are located in a (distributed) network. Specifically, RAID schemes are based on MDS (maximum distance separable) array codes that enable optimal storage and efficient encoding and decoding algorithms. With r redundancy symbols, an MDS code can sustain r erasures. For example, consider an MDS code that can correct two erasures. It is clear that when two symbols are erased, one needs to access and transmit all the remaining information to rebuild the erasures. However, an interesting and practical question is: what is the smallest fraction of information that one needs to access and transmit in order to correct a single erasure? In Part I we will show that the lower bound of 1/2 is achievable and that the result can be generalized to codes with an arbitrary number of parities and optimal rebuilding.

We consider the device level and study coding and modulation techniques for emerging non-volatile memories such as flash memory. In particular, rank modulation is a novel data representation scheme proposed by Jiang et al. for multi-level flash memory cells, in which a set of n cells stores information in the permutation induced by the different charge levels of the individual cells. It eliminates the need for discrete cell levels, as well as overshoot errors, when programming cells. In order to decrease the decoding complexity, we propose two variations of this scheme in Part II: bounded rank modulation, where only small sliding windows of cells are sorted to generate permutations, and partial rank modulation, where only part of the n cells is used to represent data. We study limits on the capacity of bounded rank modulation and propose encoding and decoding algorithms. We show that overlaps between windows will increase capacity. We present Gray codes spanning all possible partial-rank states and using only "push-to-the-top" operations. These Gray codes turn out to solve an open combinatorial problem called the universal cycle problem, where a universal cycle is a sequence of integers generating all possible partial permutations.
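A minimal sketch of the basic rank-modulation mapping and the push-to-the-top programming primitive (Python, with illustrative names; the bounded and partial variants above refine this basic scheme):

    import numpy as np

    def induced_permutation(levels):
        """The stored symbol: the permutation ranking cells by charge,
        highest cell first."""
        return tuple(np.argsort(-np.asarray(levels)))

    def push_to_top(levels, i, margin=1.0):
        """Program cell i above the current maximum -- the only operation
        needed to reach any permutation, with no risk of overshoot."""
        levels = list(levels)
        levels[i] = max(levels) + margin
        return levels

    levels = [0.3, 1.7, 0.9]                             # a three-cell group
    print(induced_permutation(levels))                   # (1, 2, 0)
    print(induced_permutation(push_to_top(levels, 0)))   # (0, 1, 2)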

Relevance: 20.00%

Publisher:

Abstract:

This thesis belongs to the growing field of economic networks. In particular, we develop three essays in which we study the problem of bargaining, discrete choice representation, and pricing in the context of networked markets. Despite analyzing very different problems, the three essays share the common feature of making use of a network representation to describe the market of interest.

In Chapter 1 we present an analysis of bargaining in networked markets. We make two contributions. First, we characterize market equilibria in a bargaining model, and find that players' equilibrium payoffs coincide with their degree of centrality in the network, as measured by Bonacich's centrality measure. This characterization allows us to map, in a simple way, network structures into market equilibrium outcomes, so that payoff dispersion in networked markets is driven by players' network positions. Second, we show that the market equilibrium for our model converges to the so-called eigenvector centrality measure. We show that the economic condition for reaching convergence is that the players' discount factor goes to one. In particular, we show how the discount factor, the matching technology, and the network structure interact in a very particular way in order to see the eigenvector centrality as the limiting case of our market equilibrium.

We point out that the eigenvector approach is a way of finding the most central or relevant players in terms of the "global" structure of the network, while paying less attention to patterns that are more "local." Mathematically, the eigenvector centrality captures the relevance of players in the bargaining process using the eigenvector associated with the largest eigenvalue of the adjacency matrix of a given network. Thus our result may be viewed as an economic justification of the eigenvector approach in the context of bargaining in networked markets.
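The limiting relation is easy to check numerically on any small graph: the Bonacich vector c(β) = (I − βA)⁻¹ A·1 aligns with the principal eigenvector of the adjacency matrix A as β approaches 1/λ_max. The Python sketch below is a generic illustration of that fact, not the thesis's bargaining model:

    import numpy as np

    A = np.array([[0, 1, 1, 0],
                  [1, 0, 1, 0],
                  [1, 1, 0, 1],
                  [0, 0, 1, 0]], dtype=float)    # a small undirected network
    lam, vecs = np.linalg.eigh(A)
    v = np.abs(vecs[:, -1]); v /= v.sum()        # eigenvector centrality

    for beta in (0.1, 0.3, 0.99 / lam[-1]):      # beta must stay below 1/lam_max
        c = np.linalg.solve(np.eye(4) - beta * A, A @ np.ones(4))
        print(round(beta, 3), c / c.sum())       # approaches v as beta -> 1/lam_max
    print(v)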

As an application, we analyze the special case of seller-buyer networks, showing how our framework may be useful for analyzing price dispersion as a function of sellers and buyers' network positions.

Finally, in Chapter 3 we study the problem of price competition and free entry in networked markets subject to congestion effects. In many environments, such as communication networks in which network flows are allocated, or transportation networks in which traffic is directed through the underlying road architecture, congestion plays an important role. In particular, we consider a network with multiple origins and a common destination node, where each link is owned by a firm that sets prices in order to maximize profits, whereas users want to minimize the total cost they face, which is given by the congestion cost plus the prices set by firms. In this environment, we introduce the notion of Markovian traffic equilibrium to establish the existence and uniqueness of a pure strategy price equilibrium, without assuming that the demand functions are concave or imposing particular functional forms for the latency functions. We derive explicit conditions to guarantee existence and uniqueness of equilibria. Given this existence and uniqueness result, we apply our framework to study entry decisions and welfare, and establish that in congested markets with free entry, the number of firms exceeds the social optimum.

Relevance: 20.00%

Publisher:

Abstract:

Galaxy clusters are the largest gravitationally bound objects in the observable universe, and they are formed from the largest perturbations of the primordial matter power spectrum. During initial cluster collapse, matter is accelerated to supersonic velocities, and the baryonic component is heated as it passes through accretion shocks. This process stabilizes when the pressure of the bound matter prevents further gravitational collapse. Galaxy clusters are useful cosmological probes, because their formation progressively freezes out at the epoch when dark energy begins to dominate the expansion and energy density of the universe. A diverse set of observables, from radio through X-ray wavelengths, are sourced from galaxy clusters, and this is useful for self-calibration. The distributions of these observables trace a cluster's dark matter halo, which represents more than 80% of the cluster's gravitational potential. One such observable is the Sunyaev-Zel'dovich effect (SZE), which results when the ionized intracluster medium distorts the cosmic microwave background via inverse Compton scattering. Great technical advances in the last several decades have made regular observation of the SZE possible. Resolved SZE science, such as is explored in this analysis, has benefitted from the construction of large-format camera arrays consisting of highly sensitive millimeter-wave detectors, such as Bolocam. Bolocam is a millimeter-wave camera, sensitive to 140 GHz and 268 GHz radiation, located at one of the best observing sites in the world: the Caltech Submillimeter Observatory on Mauna Kea in Hawaii. Bolocam fielded 144 of the original spider-web NTD bolometers used in an entire generation of ground-based, balloon-borne, and satellite-borne millimeter-wave instrumentation. Over approximately six years, our group at Caltech has developed a mature galaxy cluster observational program with Bolocam. This thesis describes the construction of the instrument's full cluster catalog: BOXSZ. Using this catalog, I have scaled the Bolocam SZE measurements with X-ray mass approximations in an effort to characterize the SZE signal as a viable mass probe for cosmology. This work has confirmed the SZE to be a low-scatter tracer of cluster mass. The analysis has also revealed how sensitive the SZE-mass scaling is to small biases in the adopted mass approximation. Future Bolocam analysis efforts are set on resolving these discrepancies by approximating cluster mass jointly with different observational probes.
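For reference, the thermal SZE signal measured by such instruments is conventionally quantified through the Compton y-parameter (standard expressions, not specific to this abstract): y = (σ_T / m_e c²) ∫ P_e dl, the line-of-sight integral of the electron pressure, and the fractional temperature distortion is ΔT/T_CMB = f(x)·y with x = hν/(k_B T_CMB) and, in the non-relativistic limit, f(x) = x·coth(x/2) − 4. This f(x) is negative below ~217 GHz (a decrement in Bolocam's 140 GHz band) and positive above it (an increment at 268 GHz).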

Relevance: 20.00%

Publisher:

Abstract:

The applicability of the white-noise method to the identification of a nonlinear system is investigated. Subsequently, the method is applied to certain vertebrate retinal neuronal systems and nonlinear, dynamic transfer functions are derived which describe quantitatively the information transformations starting with the light-pattern stimulus and culminating in the ganglion response which constitutes the visually-derived input to the brain. The retina of the catfish, Ictalurus punctatus, is used for the experiments.

The Wiener formulation of the white-noise theory is shown to be impractical and difficult to apply to a physical system. A different formulation based on crosscorrelation techniques is shown to be applicable to a wide range of physical systems provided certain considerations are taken into account. These considerations include the time-invariance of the system, an optimum choice of the white-noise input bandwidth, nonlinearities that allow a representation in terms of a small number of characterizing kernels, the memory of the system, and the temporal length of the characterizing experiment. Error analysis of the kernel estimates is made taking into account various sources of error, such as noise at the input and output, the bandwidth of the white-noise input, and the truncation of the Gaussian by the apparatus.
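For the first-order kernel, the crosscorrelation technique reduces to h1(τ) = E[y(t) x(t − τ)] / P for a Gaussian white-noise input x of power P (the Lee-Schetzen formulation). A synthetic Python sketch with an invented toy system, illustrating why the considerations above (bandwidth, record length, noise) govern the quality of the estimate:

    import numpy as np

    rng = np.random.default_rng(1)
    n = 200_000
    x = rng.normal(0.0, 1.0, n)                        # white-noise stimulus
    tau = np.arange(60)
    h_true = np.exp(-tau / 15.0) * np.sin(tau / 5.0)   # toy linear kernel
    u = np.convolve(x, h_true)[:n]
    y = u + 0.2 * u**2                                 # mild static nonlinearity

    P = np.var(x)
    h1 = np.array([np.mean(y[t:] * x[:n - t]) for t in tau]) / P
    # h1 recovers h_true: the quadratic term drops out of the first-order
    # crosscorrelation because odd moments of a Gaussian vanish. The
    # second-order kernel would come from a double crosscorrelation.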

Nonlinear transfer functions are obtained, as sets of kernels, for several neuronal systems: Light → Receptors, Light → Horizontal, Horizontal → Ganglion, Light → Ganglion and Light → ERG. The derived models can predict, with reasonable accuracy, the system response to any input. Comparison of model and physical system performance showed close agreement for a great number of tests, the most stringent of which is comparison of their responses to a white-noise input. Other tests include step and sine responses and power spectra.

Many functional traits are revealed by these models. Some are: (a) the receptor and horizontal cell systems are nearly linear (small signal) with certain "small" nonlinearities, and become faster (latency-wise and frequency-response-wise) at higher intensity levels, (b) all ganglion systems are nonlinear (half-wave rectification), (c) the receptive field center to ganglion system is slower (latency-wise and frequency-response-wise) than the periphery to ganglion system, (d) the lateral (eccentric) ganglion systems are just as fast (latency and frequency response) as the concentric ones, (e) (bipolar response) = (input from receptors) - (input from horizontal cell), (f) receptive field center and periphery exert an antagonistic influence on the ganglion response, (g) implications about the origin of ERG, and many others.

An analytical solution is obtained for the spatial distribution of potential in the S-space, which fits very well experimental data. Different synaptic mechanisms of excitation for the external and internal horizontal cells are implied.

Relevance: 20.00%

Publisher:

Abstract:

For damaging response, the force-displacement relationship of a structure is highly nonlinear and history-dependent. For satisfactory analysis of such behavior, it is important to be able to characterize and to model the phenomenon of hysteresis accurately. A number of models have been proposed for response studies of hysteretic structures, some of which are examined in detail in this thesis. There are two popular classes of models used in the analysis of curvilinear hysteretic systems. The first is of the distributed element or assemblage type, which models the physical behavior of the system by using well-known building blocks. The second class of models is of the differential equation type, which is based on the introduction of an extra variable to describe the history dependence of the system.

Owing to their mathematical simplicity, the latter models have been used extensively for various applications in structural dynamics, most notably in the estimation of the response statistics of hysteretic systems subjected to stochastic excitation. But the fundamental characteristics of these models are still not clearly understood. A response analysis of systems using both the Distributed Element model and the differential equation model, subjected to a variety of quasi-static and dynamic loading conditions, leads to the following conclusion: caution must be exercised when employing models of the second class in structural response studies, as they can produce misleading results.
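The best-known member of the differential-equation class is the Bouc-Wen model, in which an auxiliary state z carries the loading history through dz = (A − (β·sign(z·dx) + γ)|z|^n) dx. A minimal Python integration sketch with illustrative parameter values (not necessarily the variant examined in this thesis):

    import numpy as np

    def bouc_wen_force(x_hist, alpha=0.5, k=1.0, A=1.0, beta=0.5, gamma=0.5, n=1.0):
        """Restoring force of a Bouc-Wen element along a displacement history."""
        z, forces = 0.0, [0.0]
        for i in range(1, len(x_hist)):
            dx = x_hist[i] - x_hist[i - 1]
            z += (A - (beta * np.sign(z * dx) + gamma) * abs(z) ** n) * dx
            forces.append(alpha * k * x_hist[i] + (1.0 - alpha) * k * z)
        return np.asarray(forces)

    t = np.linspace(0.0, 4.0 * np.pi, 2000)
    F = bouc_wen_force(np.sin(t))   # F vs. sin(t) traces a hysteresis loop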

Masing's hypothesis, originally proposed for steady-state loading, can be extended to general transient loading as well, leading to considerable simplification in the analysis of the Distributed Element models. A simple, nonparametric identification technique is also outlined, by means of which an optimal model representation involving one additional state variable is determined for hysteretic systems.

Relevance: 20.00%

Publisher:

Abstract:

This thesis describes simple extensions of the standard model with new sources of baryon number violation but no proton decay. The motivation for constructing such theories comes from the shortcomings of the standard model to explain the generation of baryon asymmetry in the universe, and from the absence of experimental evidence for proton decay. However, lack of any direct evidence for baryon number violation in general puts strong bounds on the naturalness of some of those models and favors theories with suppressed baryon number violation below the TeV scale. The initial part of the thesis concentrates on investigating models containing new scalars responsible for baryon number breaking. A model with new color sextet scalars is analyzed in more detail. Apart from generating cosmological baryon number, it gives nontrivial predictions for the neutron-antineutron oscillations, the electric dipole moment of the neutron, and neutral meson mixing. The second model discussed in the thesis contains a new scalar leptoquark. Although this model predicts mainly lepton flavor violation and a nonzero electric dipole moment of the electron, it includes, in its original form, baryon number violating nonrenormalizable dimension-five operators triggering proton decay. Imposing an appropriate discrete symmetry forbids such operators. Finally, a supersymmetric model with gauged baryon and lepton numbers is proposed. It provides a natural explanation for proton stability and predicts lepton number violating processes below the supersymmetry breaking scale, which can be tested at the Large Hadron Collider. The dark matter candidate in this model carries baryon number and can be searched for in direct detection experiments as well. The thesis is completed by constructing and briefly discussing a minimal extension of the standard model with gauged baryon, lepton, and flavor symmetries.

Relevance: 20.00%

Publisher:

Abstract:

There is a wonderful conjecture of Bloch and Kato that generalizes both the analytic Class Number Formula and the Birch and Swinnerton-Dyer conjecture. The conjecture itself was generalized by Fukaya and Kato to an equivariant formulation. In this thesis, I provide a new proof for the equivariant local Tamagawa number conjecture in the case of Tate motives for unramified fields, using Iwasawa theory and (φ,Γ)-modules, and provide some work towards extending the proof to tamely ramified fields.

Relevance: 20.00%

Publisher:

Abstract:

The Hamilton-Jacobi-Bellman (HJB) equation is central to stochastic optimal control (SOC) theory, yielding the optimal solution to general problems with known dynamics and a given cost functional. Given the assumption of quadratic cost on the control input, it is well known that the HJB equation reduces to a particular partial differential equation (PDE). While powerful, this reduction is not commonly used, as the PDE is of second order, is nonlinear, and examples exist where the problem has no solution in a classical sense. Furthermore, each state of the system appears as another dimension of the PDE, giving rise to the curse of dimensionality: since the number of degrees of freedom required to solve the optimal control problem grows exponentially with dimension, the problem becomes intractable for systems of all but modest dimension.

In the last decade researchers have found that under certain, fairly non-restrictive structural assumptions, the HJB may be transformed into a linear PDE, with an interesting analogue in the discretized domain of Markov Decision Processes (MDP). The work presented in this thesis uses the linearity of this particular form of the HJB PDE to push the computational boundaries of stochastic optimal control.
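The transformation referred to here is, in the linearly solvable / path-integral control literature, an exponential (Cole-Hopf-type) substitution; sketching the standard first-exit form with notation of my choosing: for dynamics dx = (f(x) + G(x)u) dt + B(x) dω, cost rate q(x) + (1/2) u^T R u, and the compatibility condition BB^T = λ G R^{-1} G^T, setting V(x) = −λ log Ψ(x) cancels the quadratic term of the HJB,

    0 = q + f^T ∇V + (1/2) tr(BB^T ∇²V) − (1/2) ∇V^T G R^{-1} G^T ∇V,

leaving a PDE that is linear in the desirability Ψ:

    (q/λ) Ψ = f^T ∇Ψ + (1/2) tr(BB^T ∇²Ψ).

Its discretization yields the MDP analogue mentioned above.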

These boundaries are pushed by crafting together previously disjoint lines of research in computation. The first of these is the use of Sum of Squares (SOS) techniques for synthesis of control policies. A candidate polynomial with variable coefficients is proposed as the solution to the stochastic optimal control problem. An SOS relaxation is then taken to the partial differential constraints, leading to a hierarchy of semidefinite relaxations with improving sub-optimality gap. The resulting approximate solutions are shown to be guaranteed over- and under-approximations for the optimal value function. It is shown that these results extend to arbitrary parabolic and elliptic PDEs, yielding a novel method for Uncertainty Quantification (UQ) of systems governed by partial differential constraints. Domain decomposition techniques are also made available, allowing for such problems to be solved via parallelization and low-order polynomials.

The optimization-based SOS technique is then contrasted with the Separated Representation (SR) approach from the applied mathematics community. The SR technique allows systems of equations to be solved through a low-rank decomposition, resulting in algorithms that scale linearly with dimensionality. Its application in stochastic optimal control allows previously uncomputable problems to be solved quickly, scaling to such complex systems as quadcopter and VTOL aircraft. This technique may be combined with the SOS approach, yielding not only a numerical technique but also an analytical one that allows entirely new classes of systems to be studied and stability properties to be guaranteed.

The analysis of the linear HJB is completed by the study of its implications in application. It is shown that the HJB and a popular technique in robotics, the use of navigation functions, sit on opposite ends of a spectrum of optimization problems, upon which tradeoffs may be made in problem complexity. Analytical solutions to the HJB in these settings are available in simplified domains, yielding guidance towards optimality for approximation schemes. Finally, the use of HJB equations in temporal multi-task planning problems is investigated. It is demonstrated that such problems are reducible to a sequence of SOC problems linked via boundary conditions. The linearity of the PDE allows us to pre-compute control policy primitives and then compose them, at essentially zero cost, to satisfy a complex temporal logic specification.

Relevance: 20.00%

Publisher:

Abstract:

Electronic structures and dynamics are the key to linking the material composition and structure to functionality and performance.

An essential issue in developing semiconductor devices for photovoltaics is to design materials with optimal band gaps and relative positioning of band levels. Approximate DFT methods have been justified for predicting band gaps from KS/GKS eigenvalues, but the accuracy depends decisively on the choice of XC functional. We show here for CuInSe2 and CuGaSe2, the parent compounds of the promising CIGS solar cells, that conventional LDA and GGA obtain gaps of 0.0-0.01 and 0.02-0.24 eV (versus experimental values of 1.04 and 1.67 eV), while the historically first global hybrid functional, B3PW91, is surprisingly the best, with band gaps of 1.07 and 1.58 eV. Furthermore, we show that for 27 related binary and ternary semiconductors, B3PW91 predicts gaps with a mean absolute deviation (MAD) of only 0.09 eV, which is substantially better than all modern hybrid functionals, including B3LYP (MAD of 0.19 eV) and the screened hybrid functional HSE06 (MAD of 0.18 eV).

The laboratory performance of CIGS solar cells (> 20% efficiency) makes them promising candidate photovoltaic devices. However, there remains little understanding of how defects at the CIGS/CdS interface affect the band offsets and interfacial energies, and hence the performance of manufactured devices. To determine these relationships, we use the B3PW91 hybrid functional of DFT with the AEP method, which we validate to provide very accurate descriptions of both band gaps and band offsets. This confirms the weak dependence of band offsets on surface orientation observed experimentally. We predict that the conduction band offset (CBO) of the perfect CuInSe2/CdS interface is large, 0.79 eV, which would dramatically degrade performance. Moreover, we show that the band gap widening induced by Ga adjusts only the valence band offset (VBO), and we find that Cd impurities do not significantly affect the CBO. Thus we show that Cu vacancies at the interface play the key role in enabling the tunability of the CBO. We predict that Na further improves the CBO by electrostatically elevating the valence levels to decrease the CBO, explaining the observed essential role of Na for high performance. Moreover, we find that K leads to a dramatic decrease in the CBO to 0.05 eV, much better than Na. We suggest that the efficiency of CIGS devices might be improved substantially by tuning the ratio of Na to K, with the improved phase stability from Na balancing the phase instability from K. All these defects reduce interfacial stability slightly, but not significantly.

A number of exotic structures have been formed through high-pressure chemistry, but applications have been hindered by difficulties in recovering the high-pressure phase to ambient conditions (i.e., one atmosphere and room temperature). Here we use dispersion-corrected DFT (the PBE-ulg flavor) to predict that above 60 GPa the most stable form of N2O (laughing gas in its molecular form) is a 1D polymer with an all-nitrogen backbone, analogous to cis-polyacetylene, in which alternating N atoms are bonded (ionic-covalently) to O. The analogous trans-polymer is only 0.03-0.10 eV per molecular unit less stable. Upon relaxation to ambient conditions, both polymers relax below 14 GPa to the same stable non-planar trans-polymer, accompanied by possible electronic structure transitions. The predicted phonon spectrum and dissociation kinetics validate the stability of this trans-poly-NNO at ambient conditions, which has potential applications as a new type of conducting polymer with all-nitrogen chains and as a high-energy oxidizer for rocket propulsion. This work illustrates in silico materials discovery, particularly in the realm of extreme conditions.

Modeling non-adiabatic electron dynamics has been a long-standing challenge for computational chemistry and materials science, and the eFF method presents a cost-efficient alternative. However, due to the deficiencies of the FSG representation, eFF is limited to low-Z elements with electrons of predominantly s-character. To overcome this, we introduce a formal set of ECP extensions that enable accurate description of p-block elements. The extensions consist of a model representing the core electrons together with the nucleus as a single pseudo-particle, represented by FSG and interacting with valence electrons through ECPs. We demonstrate and validate the ECP extensions for complex bonding structures, geometries, and energetics of systems with p-block character (C, O, Al, Si) and apply them to study materials under extreme mechanical loading conditions.

Despite its success, the eFF framework has some limitations, originating from both the design of the Pauli potentials and the FSG representation. To overcome these, we develop a new two-level-hierarchy framework that is a more rigorous and accurate successor to the eFF method. The fundamental level, GHA-QM, is based on a new set of Pauli potentials that renders exact QM-level accuracy for any FSG-represented electron system. To achieve this, we start from exactly derived energy expressions for the same-spin electron pair and fit a simple functional form, inspired by DFT, against open-singlet electron-pair curves (H2 systems). Symmetric and asymmetric scaling factors are then introduced at this level to recover the QM total energies of multiple-electron-pair systems from the sum of local interactions. To complement the imperfect FSG representation, the AMPERE extension is implemented, aiming to embed the interactions associated with both the cusp condition and explicit nodal structures. The whole GHA-QM+AMPERE framework is tested on hydrogen, and the preliminary results are promising.

Relevance: 20.00%

Publisher:

Abstract:

Kohn-Sham density functional theory (KSDFT) is currently the main workhorse of quantum mechanical calculations in physics, chemistry, and materials science. From a mechanical engineering perspective, we are interested in studying the role of defects in the mechanical properties of materials. In real materials, defects are typically found at very small concentrations: e.g., vacancies occur at parts per million, dislocation density in metals ranges from 10^10 m^-2 to 10^15 m^-2, and grain sizes vary from nanometers to micrometers in polycrystalline materials. In order to model materials at realistic defect concentrations using DFT, we would need to work with system sizes beyond millions of atoms. Due to the cubic-scaling computational cost with respect to the number of atoms in conventional DFT implementations, such system sizes are unreachable. Since the early 1990s, there has been a huge interest in developing DFT implementations that have linear-scaling computational cost. A promising approach to achieving linear-scaling cost is to approximate the density matrix in KSDFT. The focus of this thesis is to provide a firm mathematical framework to study the convergence of these approximations. We reformulate the Kohn-Sham density functional theory as a nested variational problem in the density matrix, the electrostatic potential, and a field dual to the electron density. The corresponding functional is linear in the density matrix and thus amenable to spectral representation. Based on this reformulation, we introduce a new approximation scheme, called spectral binning, which does not require smoothing of the occupancy function and thus applies at arbitrarily low temperatures. We prove convergence of the approximate solutions with respect to spectral binning and with respect to an additional spatial discretization of the domain. For a standard one-dimensional benchmark problem, we present numerical experiments for which spectral binning exhibits excellent convergence characteristics and outperforms other linear-scaling methods.

Relevance: 20.00%

Publisher:

Abstract:

The electron diffraction investigation of the following compounds has been carried out: sulfur, sulfur nitride, realgar, arsenic trisulfide, spiropentane, dimethyl trisulfide, cis- and trans-lewisite, methylal, and ethylene glycol.

The crystal structures of the following salts have been determined by x-ray diffraction: silver molybdate and hydrazinium dichloride.

Suggested revisions of the covalent radii for B, Si, P, Ge, As, Sn, Sb, and Pb have been made, and values for the covalent radii of Al, Ga, In, Tl, and Bi have been proposed.

The Schomaker-Stevenson revision of the additivity rule for single covalent bond distances has been used in conjunction with the revised radii. Agreement with experiment is in general better with the revised radii than with the former radii and additivity.
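For reference, the Schomaker-Stevenson revision replaces simple additivity, d(A−B) = r_A + r_B, with an electronegativity correction (standard form, with the commonly quoted coefficient):

    d(A−B) = r_A + r_B − 0.09 |χ_A − χ_B|    (distances in Å, χ the Pauling electronegativities),

so that bonds between atoms of unequal electronegativity are predicted to be shorter than the additive value.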

The principle of ionic bond character in addition to that present in a normal covalent bond has been applied to the observed structures of numerous molecules. It leads to a method of interpretation which is at least as consistent as the theory of multiple bond formation.

The revision of the additivity rule has been extended to double bonds. An encouraging beginning along these lines has been made, but additional experimental data are needed for clarification.