19 results for optimal composition

in CaltechTHESIS


Relevance: 20.00%

Abstract:

This thesis presents composition measurements of atmospherically relevant inorganic and organic aerosol from laboratory and ambient studies using the Aerodyne aerosol mass spectrometer (AMS). Studies include the oxidation of dodecane in the Caltech environmental chambers as well as several aircraft- and ground-based field campaigns, including the quantification of wildfire emissions off the coast of California and of Los Angeles urban emissions.

The oxidation of dodecane by OH under low-NO conditions and the formation of secondary organic aerosol (SOA) were explored using a gas-phase chemical model, gas-phase CIMS measurements, and high molecular weight ion traces from particle-phase HR-TOF-AMS mass spectra. Together, these measurements support the hypothesis that particle-phase chemistry leading to peroxyhemiacetal formation is important. Positive matrix factorization (PMF) applied to the AMS mass spectra revealed three factors representing a combination of gas-particle partitioning, chemical conversion in the aerosol, and wall deposition.
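For reference, PMF represents the measured matrix of mass spectra as a bilinear factorization with nonnegative factors (the standard formulation; the notation here is generic rather than the thesis's):

$$x_{ij} = \sum_{p=1}^{P} g_{ip}\, f_{pj} + e_{ij}, \qquad g_{ip} \ge 0, \; f_{pj} \ge 0,$$

where $x_{ij}$ is the signal at ion $j$ and time $i$, the $f_{pj}$ are factor mass-spectral profiles, the $g_{ip}$ are factor time series, and the residuals $e_{ij}$ are minimized in an uncertainty-weighted least-squares sense.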

Airborne measurements of biomass burning emissions from a chaparral fire on the central Californian coast were carried out in November 2009. Physical and chemical changes were reported for smoke aged 0–4 h. CO2-normalized ammonium, nitrate, and sulfate increased, whereas normalized OA decreased sharply in the first 1.5–2 h and then slowly increased over the remaining 2 h (a net decrease in normalized OA). Comparison to wildfire samples from the Yucatan revealed that factors such as relative humidity, incident UV radiation, age of smoke, and concentration of emissions are important for the evolution of wildfire smoke.
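The CO2 normalization referred to here is conventionally the excess mixing ratio relative to excess CO2, which removes the effect of dilution so that chemical production or loss in the aging plume is visible (a standard definition, assumed rather than stated in the abstract):

$$\frac{\Delta X}{\Delta \mathrm{CO_2}} = \frac{X_{\mathrm{plume}} - X_{\mathrm{background}}}{[\mathrm{CO_2}]_{\mathrm{plume}} - [\mathrm{CO_2}]_{\mathrm{background}}}.$$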

Ground-based aerosol composition is reported for Pasadena, CA during the summer of 2009. The OA component, which dominated the submicron aerosol mass, was deconvolved into hydrocarbon-like organic aerosol (HOA), semi-volatile oxidized organic aerosol (SVOOA), and low-volatility oxidized organic aerosol (LVOOA). The HOA/OA ratio was only 0.08–0.23, indicating that Pasadena OA in the summer months is dominated by oxidized OA resulting from transported emissions that have undergone photochemical and/or moisture-influenced processing, as opposed to primary organic aerosol emissions alone. Airborne measurements and model predictions of aerosol composition are also reported for the 2010 CalNex field campaign.

Relevance: 20.00%

Abstract:

Many engineering applications face the problem of bounding the expected value of a quantity of interest (performance, risk, cost, etc.) that depends on stochastic uncertainties whose probability distribution is not known exactly. Optimal uncertainty quantification (OUQ) is a framework that aims at obtaining the best bound in these situations by explicitly incorporating available information about the distribution. Unfortunately, this often leads to non-convex optimization problems that are numerically expensive to solve.
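Schematically, the OUQ upper bound on a quantity of interest $q$ is an optimization over the admissible set $\mathcal{A}$ of probability measures consistent with the available information:

$$\overline{U}(\mathcal{A}) = \sup_{\mu \in \mathcal{A}} \mathbb{E}_{\mu}\left[q(X)\right],$$

with the corresponding lower bound given by the infimum. Because $\mathcal{A}$ is typically defined by moment and support constraints rather than a parametric family, this optimization is in general non-convex.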

This thesis focuses on efficient numerical algorithms for OUQ problems. It begins by investigating several classes of OUQ problems that can be reformulated as convex optimization problems. Conditions on the objective function and information constraints under which a convex formulation exists are presented. Since the size of the optimization problem can become quite large, solutions for scaling up are also discussed. Finally, the capability of analyzing a practical system through such convex formulations is demonstrated by a numerical example of energy storage placement in power grids.

When an equivalent convex formulation is unavailable, it is possible to find a convex problem, known as a convex relaxation, that provides a meaningful bound for the original problem. As an example, the thesis investigates the setting used in Hoeffding's inequality. The naive formulation requires solving a collection of non-convex polynomial optimization problems whose number grows doubly exponentially. After structures such as symmetry are exploited, it is shown that both the number and the size of the polynomial optimization problems can be reduced significantly. Each polynomial optimization problem is then bounded by its convex relaxation using sums-of-squares. These bounds are found to be tight in all the numerical examples tested in the thesis and are significantly better than Hoeffding's bounds.
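As a minimal illustration of the sums-of-squares step (a sketch with a hypothetical univariate polynomial, not a problem from the thesis), the following Python/CVXPY snippet bounds a quartic from below by requiring that p(x) − γ admit a positive semidefinite Gram matrix:

```python
import cvxpy as cp

# Bound p(x) = x^4 - 3x^2 + 1 from below: maximize gamma subject to
# p(x) - gamma = z^T Q z with z = [1, x, x^2] and Q positive semidefinite.
Q = cp.Variable((3, 3), symmetric=True)
gamma = cp.Variable()

constraints = [
    Q >> 0,                       # Gram matrix is PSD, so z^T Q z is SOS
    Q[0, 0] == 1 - gamma,         # constant coefficient
    2 * Q[0, 1] == 0,             # x coefficient
    2 * Q[0, 2] + Q[1, 1] == -3,  # x^2 coefficient
    2 * Q[1, 2] == 0,             # x^3 coefficient
    Q[2, 2] == 1,                 # x^4 coefficient
]
cp.Problem(cp.Maximize(gamma), constraints).solve()
print("SOS lower bound:", gamma.value)  # -1.25, tight for this univariate case
```

For univariate polynomials the SOS bound is exact; in the multivariate problems described above it instead yields a hierarchy of increasingly tight semidefinite relaxations.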

Relevance: 20.00%

Abstract:

This work is concerned with the derivation of optimal scaling laws, in the sense of matching lower and upper bounds on the energy, for a solid undergoing ductile fracture. The specific problem considered concerns a material sample in the form of an infinite slab of finite thickness subjected to prescribed opening displacements on its two surfaces. The solid is assumed to obey deformation theory of plasticity and, in order to further simplify the analysis, we assume isotropic rigid-plastic deformations with zero plastic spin. When hardening exponents are given values consistent with observation, the energy is found to exhibit sublinear growth. We regularize the energy through the addition of nonlocal energy terms of the strain-gradient plasticity type. This nonlocal regularization has the effect of introducing an intrinsic length scale into the energy. We also put forth a physical argument that identifies the intrinsic length and suggests a linear growth of the nonlocal energy.

Under these assumptions, ductile fracture emerges as the net result of two competing effects: whereas the sublinear growth of the local energy promotes localization of deformation to failure planes, the nonlocal regularization stabilizes this process, thus resulting in an orderly progression towards failure and a well-defined specific fracture energy. The optimal scaling laws derived here show that ductile fracture results from localization of deformations to void sheets, and that it requires a well-defined energy per unit fracture area. In particular, fractal modes of fracture are ruled out under the assumptions of the analysis. The optimal scaling laws additionally show that ductile fracture is cohesive in nature, i.e., it obeys a well-defined relation between tractions and opening displacements. Finally, the scaling laws supply a link between micromechanical properties and macroscopic fracture properties. In particular, they reveal the relative roles that surface energy and microplasticity play as contributors to the specific fracture energy of the material.

Next, we present an experimental assessment of the optimal scaling laws. We show that when the specific fracture energy is renormalized in the manner suggested by the optimal scaling laws, the data fall within the bounds predicted by the analysis and, moreover, ostensibly collapse (with allowances made for experimental scatter) onto a master curve that depends on the hardening exponent but is otherwise material independent.
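Schematically, an optimal scaling law in the sense used here asserts two-sided bounds of the same form on the minimum energy: there exist constants $0 < C_1 \le C_2$ such that

$$C_1\, \Phi(\delta, \ell) \;\le\; \inf_{u} E(u) \;\le\; C_2\, \Phi(\delta, \ell),$$

where $\delta$ is the prescribed opening displacement, $\ell$ the intrinsic length, and $\Phi$ a scaling function whose exponents depend on the hardening exponent (the specific form of $\Phi$ derived in the thesis is not reproduced here).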

Relevance: 20.00%

Abstract:

Ternary alloys of nickel-palladium-phosphorus and iron-palladium-phosphorus containing 20 atomic % phosphorus were rapidly quenched from the liquid state. The structure of the quenched alloys was investigated by X-ray diffraction. Broad maxima in the diffraction patterns, indicative of a glass-like structure, were obtained for 13 to 73 atomic % nickel and 13 to 44 atomic % iron, with palladium making up the balance of the 80 atomic % metal content.

Radial distribution functions were computed from the diffraction data and yielded average interatomic distances and coordination numbers. The structure of the amorphous alloys could be explained in terms of structural units analogous to those existing in the crystalline Pd3P, Ni3P and Fe3P phases, with iron or nickel substituting for palladium. A linear relationship between interatomic distances and composition, similar to Vegard's law, was shown for these metallic glasses.
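For reference, Vegard's law in its usual form states that a lattice dimension varies linearly between end-member values; the analogous relation observed here for the average interatomic distance $d$ at transition-metal fraction $x$ would read (schematically):

$$d(x) \approx (1 - x)\, d_{\mathrm{Pd}} + x\, d_{\mathrm{TM}}.$$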

Electrical resistivity measurements showed that the quenched alloys were metallic. Measurements were performed from liquid helium temperature (4.2°K) up to the vicinity of the melting points (900°K-1000°K). The temperature coefficient in the glassy state was very low, of the order of 10^-4/°K. A resistivity minimum was found at low temperature, varying between 9°K and 14°K for Ni(x)-Pd(80-x)-P(20) and between 17°K and 96°K for Fe(x)-Pd(80-x)-P(20), indicating the presence of a Kondo effect. Resistivity measurements, with a constant heating rate of about 1.5°C/min, showed progressive crystallization above approximately 600°K.

The magnetic moments of the amorphous Fe-Pd-P alloys were measured as a function of magnetic field and temperature. True ferromagnetism was found for the alloys Fe(32)-Pd(48)-P(20) and Fe(44)-Pd(36)-P(20), with Curie points at 165°K and 380°K respectively. Saturation magnetic moments extrapolated to 0°K were 1.70 µB and 2.10 µB respectively. The amorphous alloy Fe(23)-Pd(57)-P(20) was assumed to be superparamagnetic. The experimental data indicate that phosphorus contributes to the decrease of moments by electron transfer, whereas palladium atoms probably carry a small magnetic moment. A preliminary investigation of the Ni-Pd-P amorphous alloys showed that these alloys are weakly paramagnetic.

Relevance: 20.00%

Abstract:

The low-thrust guidance problem is defined as the minimum terminal variance (MTV) control of a space vehicle subjected to random perturbations of its trajectory. To accomplish this control task, only bounded thrust level and thrust angle deviations are allowed, and these must be calculated based solely on the information gained from noisy, partial observations of the state. In order to establish the validity of various approximations, the problem is first investigated under the idealized conditions of perfect state information and negligible dynamic errors. To check each approximate model, an algorithm is developed to facilitate the computation of the open loop trajectories for the nonlinear bang-bang system. Using the results of this phase in conjunction with the Ornstein-Uhlenbeck process as a model for the random inputs to the system, the MTV guidance problem is reformulated as a stochastic, bang-bang, optimal control problem. Since a complete analytic solution seems to be unattainable, asymptotic solutions are developed by numerical methods. However, it is shown analytically that a Kalman filter in cascade with an appropriate nonlinear MTV controller is an optimal configuration. The resulting system is simulated using the Monte Carlo technique and is compared to other guidance schemes of current interest.
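A minimal sketch of the random-input model, assuming a scalar Ornstein-Uhlenbeck process discretized by Euler-Maruyama (parameter names and values are illustrative, not from the thesis):

```python
import numpy as np

def simulate_ou(theta, sigma, x0, dt, n_steps, rng):
    """Simulate dX = -theta*X dt + sigma dW on a uniform time grid."""
    x = np.empty(n_steps + 1)
    x[0] = x0
    for k in range(n_steps):
        x[k + 1] = x[k] - theta * x[k] * dt \
                   + sigma * np.sqrt(dt) * rng.standard_normal()
    return x

# One hypothetical Monte Carlo sample path of the trajectory perturbation
path = simulate_ou(theta=0.5, sigma=0.1, x0=0.0, dt=0.01,
                   n_steps=10_000, rng=np.random.default_rng(0))
```

Averaging terminal states over many such paths is the kind of Monte Carlo evaluation referred to above.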

Relevance: 20.00%

Abstract:

This study addresses the problem of obtaining reliable velocities and displacements from accelerograms, a concern which often arises in earthquake engineering. A closed-form acceleration expression with random parameters is developed to test any strong-motion accelerogram processing method. Integration of this analytical time history yields the exact velocities, displacements and Fourier spectra. Noise and truncation can also be added. A two-step testing procedure is proposed and the original Volume II routine is used as an illustration. The main sources of error are identified and discussed. Although these errors may be reduced, it is impossible to extract the true time histories from an analog or digital accelerogram because of the uncertain noise level and missing data.

Based on these uncertainties, a probabilistic approach is proposed as a new accelerogram processing method. A most probable record is presented as well as a reliability interval which reflects the level of error-uncertainty introduced by the recording and digitization process. The data is processed in the frequency domain, under assumptions governing either the initial value or the temporal mean of the time histories. This new processing approach is tested on synthetic records. It induces little error and the digitization noise is adequately bounded.

Filtering is intended to be kept to a minimum and two optimal error-reduction methods are proposed. The "noise filters" reduce the noise level at each harmonic of the spectrum as a function of the signal-to-noise ratio. However, the correction at low frequencies is not sufficient to significantly reduce the drifts in the integrated time histories. The "spectral substitution method" uses optimization techniques to fit spectral models of near-field, far-field or structural motions to the amplitude spectrum of the measured data. The extremes of the spectrum of the recorded data where noise and error prevail are then partly altered, but not removed, and statistical criteria provide the choice of the appropriate cutoff frequencies. This correction method has been applied to existing strong-motion far-field, near-field and structural data with promising results. Since this correction method maintains the whole frequency range of the record, it should prove to be very useful in studying the long-period dynamics of local geology and structures.
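A sketch of the per-harmonic idea behind such noise filters, assuming the classical Wiener-type attenuation rule SNR/(1+SNR) applied in the frequency domain (an illustration of the concept, not the thesis's exact filter):

```python
import numpy as np

def attenuate_noise(accel, noise_psd):
    """Scale each harmonic by SNR/(1 + SNR), a Wiener-type rule.

    accel     : 1-D acceleration record (hypothetical input array)
    noise_psd : estimated noise power at each rfft frequency bin
    """
    X = np.fft.rfft(accel)
    total_power = np.abs(X) ** 2
    snr = np.maximum(total_power / noise_psd - 1.0, 0.0)  # crude SNR estimate
    return np.fft.irfft(snr / (1.0 + snr) * X, n=len(accel))
```

As the abstract notes, a rule of this kind attenuates noisy harmonics but cannot by itself remove the low-frequency drifts that dominate the integrated velocities and displacements.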

Relevance: 20.00%

Abstract:

A general framework for multi-criteria optimal design is presented which is well-suited for automated design of structural systems. A systematic computer-aided optimal design decision process is developed which allows the designer to rapidly evaluate and improve a proposed design by taking into account the major factors of interest related to different aspects such as design, construction, and operation.

The proposed optimal design process requires the selection of the most promising choice of design parameters taken from a large design space, based on an evaluation using specified criteria. The design parameters specify a particular design, and so they relate to member sizes, structural configuration, etc. The evaluation of the design uses performance parameters which may include structural response parameters, risks due to uncertain loads and modeling errors, construction and operating costs, etc. Preference functions are used to implement the design criteria in a "soft" form. These preference functions give a measure of the degree of satisfaction of each design criterion. The overall evaluation measure for a design is built up from the individual measures for each criterion through a preference combination rule. The goal of the optimal design process is to obtain a design that has the highest overall evaluation measure: an optimization problem.
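A minimal sketch of preference functions and a combination rule (the functional forms, weights, and example criteria here are hypothetical; the thesis's own preference shapes and combination rule may differ):

```python
import math

def preference(x, x_good, x_bad):
    """Degree of satisfaction in [0, 1]: 1 at or better than x_good,
    falling smoothly to 0 at x_bad (a 'soft' criterion)."""
    if x <= x_good:
        return 1.0
    if x >= x_bad:
        return 0.0
    t = (x - x_good) / (x_bad - x_good)
    return 0.5 * (1.0 + math.cos(math.pi * t))  # smooth ramp from 1 down to 0

def overall_measure(prefs, weights):
    """Weighted geometric mean as one possible combination rule."""
    return math.prod(p ** w for p, w in zip(prefs, weights))

# Hypothetical evaluation: interstory drift ratio and construction cost (M$)
mu = overall_measure(
    [preference(0.012, x_good=0.010, x_bad=0.020),  # drift criterion
     preference(3.2, x_good=2.5, x_bad=5.0)],       # cost criterion
    weights=[0.6, 0.4],
)
```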

Genetic algorithms are stochastic optimization methods that are based on evolutionary theory. They provide the exploration power necessary to search high-dimensional design spaces for these optimal solutions. Two special genetic algorithms, hGA and vGA, are presented here for continuous and discrete optimization problems, respectively.
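For orientation, a bare-bones real-coded genetic algorithm is sketched below (tournament selection, blend crossover, Gaussian mutation); this is a generic illustration, not the hGA or vGA of the thesis:

```python
import numpy as np

def minimal_ga(fitness, bounds, pop_size=60, gens=200, seed=0):
    """Maximize `fitness` over a box; `bounds` is an array of (lo, hi) rows."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    pop = rng.uniform(lo, hi, size=(pop_size, len(lo)))
    for _ in range(gens):
        fit = np.array([fitness(x) for x in pop])
        # Binary tournament selection: higher evaluation measure wins
        i, j = rng.integers(pop_size, size=(2, pop_size))
        parents = pop[np.where(fit[i] > fit[j], i, j)]
        # Blend crossover between consecutive parents
        alpha = rng.uniform(size=pop.shape)
        pop = alpha * parents + (1 - alpha) * np.roll(parents, 1, axis=0)
        # Gaussian mutation, clipped back into the design space
        pop += rng.normal(scale=0.02 * (hi - lo), size=pop.shape)
        pop = np.clip(pop, lo, hi)
    return pop[np.argmax([fitness(x) for x in pop])]
```

Here `fitness` would be the overall evaluation measure built from the preference functions above.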

The methodology is demonstrated with several examples involving the design of truss and frame systems. These examples are solved by using the proposed hGA and vGA.

Relevance: 20.00%

Abstract:

The Hamilton-Jacobi-Bellman (HJB) equation is central to stochastic optimal control (SOC) theory, yielding the optimal solution to general problems with known dynamics and a given cost functional. Under the assumption of quadratic cost on the control input, it is well known that the HJB reduces to a particular partial differential equation (PDE). While powerful, this reduction is not commonly used, as the PDE is of second order, is nonlinear, and examples exist where the problem may not have a solution in a classical sense. Furthermore, each state of the system appears as another dimension of the PDE, giving rise to the curse of dimensionality: since the number of degrees of freedom required to solve the optimal control problem grows exponentially with dimension, the problem becomes intractable for systems of all but modest dimension.

In the last decade researchers have found that under certain, fairly non-restrictive structural assumptions, the HJB may be transformed into a linear PDE, with an interesting analogue in the discretized domain of Markov Decision Processes (MDP). The work presented in this thesis uses the linearity of this particular form of the HJB PDE to push the computational boundaries of stochastic optimal control.
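A commonly cited form of this transformation, stated schematically under the usual structural assumptions (notation generic, not necessarily the thesis's): for dynamics $dx = f(x)\,dt + G(x)\,(u\,dt + d\omega)$, cost rate $q(x) + \tfrac{1}{2}u^{\top}Ru$, and noise satisfying the compatibility condition $\lambda\, G R^{-1} G^{\top} = \Sigma$, the logarithmic transform $V(x) = -\lambda \log \Psi(x)$ turns the nonlinear HJB into a PDE that is linear in the desirability $\Psi$:

$$\frac{q(x)}{\lambda}\,\Psi(x) = f(x)^{\top}\nabla\Psi(x) + \frac{1}{2}\operatorname{tr}\!\left(\Sigma(x)\,\nabla^{2}\Psi(x)\right),$$

with optimal control $u^{*} = -R^{-1}G^{\top}\nabla V$.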

This is done by crafting together previously disjoint lines of research in computation. The first of these is the use of Sum of Squares (SOS) techniques for the synthesis of control policies. A candidate polynomial with variable coefficients is proposed as the solution to the stochastic optimal control problem. An SOS relaxation is then applied to the partial differential constraints, leading to a hierarchy of semidefinite relaxations with successively tighter sub-optimality gaps. The resulting approximate solutions are shown to be guaranteed over- and under-approximations of the optimal value function. It is shown that these results extend to arbitrary parabolic and elliptic PDEs, yielding a novel method for Uncertainty Quantification (UQ) of systems governed by partial differential constraints. Domain decomposition techniques are also made available, allowing for such problems to be solved via parallelization and low-order polynomials.

The optimization-based SOS technique is then contrasted with the Separated Representation (SR) approach from the applied mathematics community. The technique allows for systems of equations to be solved through a low-rank decomposition that results in algorithms that scale linearly with dimensionality. Its application in stochastic optimal control allows for previously uncomputable problems to be solved quickly, scaling to such complex systems as the Quadcopter and VTOL aircraft. This technique may be combined with the SOS approach, yielding not only a numerical technique, but also an analytical one that allows for entirely new classes of systems to be studied and for stability properties to be guaranteed.
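The separated representation approximates a multivariate function as a short sum of products of univariate functions, so that storage and arithmetic grow linearly in the dimension $d$ for fixed separation rank $r$ (schematic form):

$$\Psi(x_1, \dots, x_d) \approx \sum_{l=1}^{r} \prod_{i=1}^{d} \psi_i^{l}(x_i).$$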

The analysis of the linear HJB is completed by the study of its implications in application. It is shown that the HJB and a popular technique in robotics, the use of navigation functions, sit on opposite ends of a spectrum of optimization problems, upon which tradeoffs may be made in problem complexity. Analytical solutions to the HJB in these settings are available in simplified domains, yielding guidance towards optimality for approximation schemes. Finally, the use of HJB equations in temporal multi-task planning problems is investigated. It is demonstrated that such problems are reducible to a sequence of SOC problems linked via boundary conditions. The linearity of the PDE allows us to pre-compute control policy primitives and then compose them, at essentially zero cost, to satisfy a complex temporal logic specification.

Relevance: 20.00%

Abstract:

This thesis brings together four papers on optimal resource allocation under uncertainty with capacity constraints. The first is an extension of the Arrow-Debreu contingent claim model to a good subject to supply uncertainty for which delivery capacity has to be chosen before the uncertainty is resolved. The second compares an ex-ante contingent claims market to a dynamic market in which capacity is chosen ex-ante and output and consumption decisions are made ex-post. The third extends the analysis to a storable good subject to random supply. Finally, the fourth examines optimal allocation of water under an appropriative rights system.

Relevance: 20.00%

Abstract:

The influence of composition on the structure and on the electrical and magnetic properties of amorphous Pd-Mn-P and Pd-Co-P prepared by rapid quenching techniques was investigated in terms of (1) the 3d band filling of the first transition metal group, (2) the concentration of phosphorus, which acts as an electron donor, and (3) the transition metal concentration.

The structure is characterized by a set of polyhedral subunits essentially inverse to the packing of hard spheres in real space. Computer-generated distribution functions, based on a Monte Carlo random statistical distribution of these polyhedral units, reproduced the experimentally calculated atomic distribution function. As a result, several possible "structural parameters" are proposed, such as the number of nearest neighbors, the metal-to-metal distance, the degree of short-range order, and the affinity between metal-metal and metal-metalloid pairs. It is shown that the degree of disorder increases from Ni to Mn. Similar behavior is observed with increasing phosphorus concentration.

The magnetic properties of Pd-Co-P alloys show that they are ferromagnetic, with a Curie temperature rising from 272 to 399°K as the cobalt concentration increases from 15 to 50 at.%. Below 20 at.% Co, the short-range exchange interactions which produce the ferromagnetism are unable to establish long-range magnetic order, and a peak in the magnetization appears at the lowest temperatures. Electrical resistivity measurements were performed from liquid helium temperatures up to the vicinity of the melting point (900°K). The thermomagnetic analysis was carried out under an applied field of 6.0 kOe. The electrical resistivity of Pd-Co-P shows the coexistence of a Kondo-like minimum with ferromagnetism. The minimum becomes less pronounced as the transition metal concentration increases, and the coefficients of ln T and T^2 become smaller and strongly temperature dependent. The negative magnetoresistivity is a strong indication of the existence of localized moments.
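The low-temperature form implied by these ln T and T^2 coefficients is the standard Kondo-type resistivity (a textbook expression, with generic coefficients $\rho_0$, $a$, $c$):

$$\rho(T) = \rho_0 + aT^{2} - c\ln T,$$

which has a minimum at $T_{\min} = \sqrt{c/(2a)}$.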

The temperature coefficient of resistivity, which is positive for Pd-Fe-P, Pd-Ni-P, and Pd-Co-P, becomes negative for Pd-Mn-P. It is possible to account for the negative temperature dependence by the localized spin fluctuation model and the high density of states at the Fermi energy, which reaches a maximum between Mn and Cr. The magnetization curves for Pd-Mn-P are typical of those resulting from the interplay of different exchange forces. The established relationship between susceptibility and resistivity confirms the localized spin fluctuation model. The magnetoresistivity of Pd-Mn-P could be interpreted in terms of a short-range magnetic ordering that could arise from Ruderman-Kittel-type interactions.

Relevance: 20.00%

Abstract:

Government procurement of a new good or service is a process that usually includes basic research, development, and production. Empirical evidence indicates that investments in research and development (R and D) before production are significant in many defense procurements. Thus, an optimal procurement policy should not only select the most efficient producer, but also induce the contractors to design the best product and to develop the best technology. The current economic theory of optimal procurement and contracting, which has emphasized production but ignored R and D, is therefore difficult to apply to many cases of procurement.

In this thesis, I provide basic models of both R and D and production in the procurement process, where a number of firms invest in private R and D and compete for a government contract. R and D is modeled as a stochastic cost-reduction process. The government is considered both as a profit maximizer and as a procurement cost minimizer. In comparison to the literature, the following results derived from my models are significant. First, R and D matters in procurement contracting: when offering the optimal contract, the government is better off if it correctly takes into account costly private R and D investment. Second, competition matters: the optimal contract and the total equilibrium R and D expenditures vary with the number of firms, and the government usually prefers free entry of firms to infinite competition among them. Third, under an R and D technology with constant marginal returns to scale, it is socially optimal to have a single firm conduct all of the R and D and production. Fourth, in an independent private values environment with risk-neutral firms, an informed government should select one of four standard auction procedures with an appropriate announced reserve price, acting as if it does not have any private information.

Relevance: 20.00%

Abstract:

The Low Energy Telescopes on the Voyager spacecraft are used to measure the elemental composition (2 ≤ Z ≤ 28) and energy spectra (5 to 15 MeV/nucleon) of solar energetic particles (SEPs) in seven large flare events. Four flare events whose SEP abundance ratios are approximately independent of energy/nucleon are selected. The abundances for these events are compared from flare to flare and to solar abundances from other sources: spectroscopy of the photosphere and corona, and solar wind measurements.

The selected SEP composition results may be described by an average composition plus a systematic flare-to-flare deviation about the average. For each of the four events, the ratios of the SEP abundances to the four-flare average SEP abundances are approximately monotonic functions of nuclear charge Z in the range 6 ≤ Z ≤ 28. An exception to this Z-dependent trend occurs for He, whose abundance relative to Si is nearly the same in all four events.

The four-flare average SEP composition is significantly different from the solar composition determined by photospheric spectroscopy: The elements C, N and O are depleted in SEPs by a factor of about five relative to the elements Na, Mg, Al, Si, Ca, Cr, Fe and Ni. For some elemental abundance ratios (e.g. Mg/O), the difference between SEP and photospheric results is persistent from flare to flare and is apparently not due to a systematic difference in SEP energy/nucleon spectra between the elements, nor to propagation effects which would result in a time-dependent abundance ratio in individual flare events.

The four-flare average SEP composition is in agreement with solar wind abundance results and with a number of recent coronal abundance measurements. The evidence for a common depletion of oxygen in SEPs, the corona and the solar wind relative to the photosphere suggests that the SEPs originate in the corona and that both the SEPs and solar wind sample a coronal composition which is significantly and persistently different from that of the photosphere.

Relevance: 20.00%

Abstract:

We report measurements of isotope abundance ratios for 5-50 MeV/nucleon nuclei from a large solar flare that occurred on September 23, 1978. The measurements were made by the Heavy Isotope Spectrometer Telescope (HIST) on the ISEE-3 satellite orbiting the Sun near an Earth-Sun libration point approximately one million miles sunward of the Earth. We report finite values for the isotope abundance ratios ^(13)C/^(12)C, ^(15)N/^(14)N, ^(18)O/^(16)O, ^(22)Ne/^(20)Ne, ^(25)Mg/^(24)Mg, and ^(26)Mg/^(24)Mg, and upper limits for the isotope abundance ratios ^(3)He/^(4)He, ^(14)C/^(12)C, ^(17)O/^(16)O, and ^(21)Ne/^(20)Ne.

We measured element abundances and spectra to compare the September 23, 1978 flare with other flares reported in the literature. The flare is a typical large flare with "low" Fe/O abundance (≤ 0.1).

For ^(13)C/^(12)C, ^(15)N/^(14)N, ^(18)O/^(16)O, ^(25)Mg/^(24)Mg, and ^(26)Mg/^(24)Mg, our measured isotope abundance ratios agree with the solar system abundance ratios of Cameron (1981). For neon we measure ^(22)Ne/^(20)Ne = 0.109^(+0.026)_(-0.019), a value that differs at the 97.5% confidence level from the abundance measured in the solar wind by Geiss et al. (1972) of ^(22)Ne/^(20)Ne = 0.073 ± 0.001. Our measurement of ^(22)Ne/^(20)Ne agrees with the isotopic composition of the meteoritic component neon-A.

Separate arguments appear to rule out simple mass fractionation, in the solar wind and in our solar energetic particle measurements, as the cause of the discrepancy between the apparent compositions of these two sources of solar material.

Relevance: 20.00%

Abstract:

The isotopic compositions of galactic cosmic ray boron, carbon, and nitrogen have been measured at energies near 300 MeV amu^(-1), using a balloon-borne instrument at an atmospheric depth of ~5 g cm^(-2). The calibrations of the detectors comprising the instrument are described. The saturation properties of the cesium iodide scintillators used for measurement of particle energy are studied in the context of analyzing the data for mass. The achieved rms mass resolution varies from ~0.3 amu at boron to ~0.5 amu at nitrogen, consistent with a theoretical analysis of the contributing factors. Corrected for detector interactions and the effects of the residual atmosphere, the results are ^(10)B/B = 0.33^(+0.17)_(-0.11), ^(13)C/C = 0.06^(+0.13)_(-0.01), and ^(15)N/N = 0.42^(+0.19)_(-0.17). A model of galactic propagation and solar modulation is described. Assuming a cosmic ray source composition of solar-like isotopic abundances, the model predicts abundances near earth consistent with the measurements.

Relevance: 20.00%

Abstract:

Electronic structure and dynamics are the key to linking material composition and structure to functionality and performance.

An essential issue in developing semiconductor devices for photovoltaics is to design materials with optimal band gaps and relative positioning of band levels. Approximate DFT methods have been justified for predicting band gaps from KS/GKS eigenvalues, but the accuracy depends decisively on the choice of exchange-correlation (XC) functional. We show here that for CuInSe2 and CuGaSe2, the parent compounds of the promising CIGS solar cells, conventional LDA and GGA obtain gaps of 0.0-0.01 and 0.02-0.24 eV (versus experimental values of 1.04 and 1.67 eV), while the historically first global hybrid functional, B3PW91, is surprisingly the best, with band gaps of 1.07 and 1.58 eV. Furthermore, we show that for 27 related binary and ternary semiconductors, B3PW91 predicts gaps with a mean absolute deviation (MAD) of only 0.09 eV, which is substantially better than all modern hybrid functionals, including B3LYP (MAD of 0.19 eV) and the screened hybrid functional HSE06 (MAD of 0.18 eV).

The laboratory performance of CIGS solar cells (>20% efficiency) makes them promising candidate photovoltaic devices. However, there remains little understanding of how defects at the CIGS/CdS interface affect the band offsets and interfacial energies, and hence the performance of manufactured devices. To determine these relationships, we use the B3PW91 hybrid functional of DFT with the AEP method, which we validate to provide very accurate descriptions of both band gaps and band offsets. This confirms the weak dependence of band offsets on surface orientation observed experimentally. We predict that the conduction band offset (CBO) of the perfect CuInSe2/CdS interface is large, 0.79 eV, which would dramatically degrade performance. Moreover, we show that the band gap widening induced by Ga adjusts only the valence band offset (VBO), and we find that Cd impurities do not significantly affect the CBO. Thus we show that Cu vacancies at the interface play the key role in enabling the tunability of the CBO. We predict that Na further decreases the CBO by electrostatically elevating the valence levels, explaining the observed essential role of Na for high performance. Moreover, we find that K leads to a dramatic decrease in the CBO to 0.05 eV, much better than Na. We suggest that the efficiency of CIGS devices might be improved substantially by tuning the ratio of Na to K, with the improved phase stability from Na balancing the phase instability from K. All these defects reduce interfacial stability slightly, but not significantly.

A number of exotic structures have been formed through high-pressure chemistry, but applications have been hindered by difficulties in recovering the high-pressure phase at ambient conditions (i.e., one atmosphere and room temperature). Here we use dispersion-corrected DFT (the PBE-ulg flavor) to predict that above 60 GPa the most stable form of N2O (the laughing gas in its molecular form) is a 1D polymer with an all-nitrogen backbone analogous to cis-polyacetylene, in which alternate N atoms are bonded (ionic covalent) to O. The analogous trans-polymer is only 0.03-0.10 eV per molecular unit less stable. Upon relaxation to ambient conditions, both polymers relax below 14 GPa to the same stable non-planar trans-polymer, accompanied by possible electronic structure transitions. The predicted phonon spectrum and dissociation kinetics validate the stability of this trans-poly-NNO at ambient conditions, which has potential applications as a new type of conducting polymer with all-nitrogen chains and as a high-energy oxidizer for rocket propulsion. This work illustrates in silico materials discovery, particularly in the realm of extreme conditions.

Modeling non-adiabatic electron dynamics has been a long-standing challenge for computational chemistry and materials science, and the eFF method presents a cost-efficient alternative. However, due to the deficiency of the FSG representation, eFF is limited to low-Z elements with electrons of predominantly s-character. To overcome this, we introduce a formal set of effective core potential (ECP) extensions that enable an accurate description of p-block elements. The extensions consist of a model in which the core electrons and the nucleus are represented as a single pseudo-particle, described by an FSG, that interacts with the valence electrons through ECPs. We demonstrate and validate the ECP extensions for complex bonding structures, geometries, and energetics of systems with p-block character (C, O, Al, Si) and apply them to study materials under extreme mechanical loading conditions.

Despite its success, the eFF framework has some limitations, originating from both the design of the Pauli potentials and the FSG representation. To overcome these, we develop a new framework with a two-level hierarchy that is a more rigorous and accurate successor to the eFF method. The fundamental level, GHA-QM, is based on a new set of Pauli potentials that renders exact QM-level accuracy for any FSG-represented electron system. To achieve this, we start from exactly derived energy expressions for the same-spin electron pair and fit a simple functional form, inspired by DFT, to open-singlet electron-pair curves (H2 systems). Symmetric and asymmetric scaling factors are then introduced at this level to recover the QM total energies of multiple-electron-pair systems from the sum of local interactions. To complement the imperfect FSG representation, the AMPERE extension is implemented, aiming at embedding the interactions associated with both the cusp condition and explicit nodal structures. The whole GHA-QM+AMPERE framework is tested on hydrogen systems, and the preliminary results are promising.