6 results for statistical methods
in CaltechTHESIS
Abstract:
High-resolution orbital and in situ observations of the Martian surface acquired during the past two decades provide the opportunity to study the rock record of Mars at an unprecedented level of detail. This dissertation consists of four studies whose common goal is to establish new standards for the quantitative analysis of visible and near-infrared data from the surface of Mars. Through the compilation of global image inventories, the application of stratigraphic and sedimentologic statistical methods, and the use of laboratory analogs, this dissertation provides insight into the history of past depositional and diagenetic processes on Mars. The first study presents a global inventory of stratified deposits observed in images from the High Resolution Imaging Science Experiment (HiRISE) camera on board the Mars Reconnaissance Orbiter. This work uses the widespread coverage of high-resolution orbital images to make global-scale observations about the processes controlling sediment transport and deposition on Mars. The next chapter presents a study of bed-thickness distributions in Martian sedimentary deposits, showing how statistical methods can be used to establish quantitative criteria for evaluating the depositional history of stratified deposits observed in orbital images. The third study tests the ability of spectral mixing models to obtain quantitative mineral abundances from near-infrared reflectance spectra of clay and sulfate mixtures in the laboratory, for application to the analysis of orbital spectra of sedimentary deposits on Mars. The final study employs a statistical analysis of the size, shape, and distribution of nodules observed by the Mars Science Laboratory Curiosity rover team in the Sheepbed mudstone at Yellowknife Bay in Gale crater. This analysis is used to evaluate hypotheses for nodule formation and to gain insight into the diagenetic history of an ancient habitable environment on Mars.
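As a rough illustration of the bed-thickness statistics described above, the sketch below fits exponential and lognormal models to a hypothetical set of measured thicknesses and compares the fits; the input file and the choice of candidate distributions are assumptions for illustration, not details taken from the thesis.

```python
import numpy as np
from scipy import stats

# Hypothetical bed-thickness measurements (metres) digitized from orbital images.
thicknesses = np.loadtxt("bed_thicknesses_m.txt")

# Fit two candidate models commonly considered for stratal thickness statistics.
exp_params = stats.expon.fit(thicknesses, floc=0.0)
logn_params = stats.lognorm.fit(thicknesses, floc=0.0)

# Compare the fits by log-likelihood (higher is better) ...
ll_exp = np.sum(stats.expon.logpdf(thicknesses, *exp_params))
ll_logn = np.sum(stats.lognorm.logpdf(thicknesses, *logn_params))

# ... and by a Kolmogorov-Smirnov test against each fitted model.
ks_exp = stats.kstest(thicknesses, "expon", args=exp_params)
ks_logn = stats.kstest(thicknesses, "lognorm", args=logn_params)

print(f"exponential: logL = {ll_exp:.1f}, KS p = {ks_exp.pvalue:.3f}")
print(f"lognormal:   logL = {ll_logn:.1f}, KS p = {ks_logn.pvalue:.3f}")
```

Which family fits better is itself diagnostic: exponential-like thickness distributions are often read as a signature of stochastic deposition, while systematic departures can point to ordered or truncated stratigraphy.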
Abstract:
A means of assessing the effectiveness of methods used in the numerical solution of various linear ill-posed problems is outlined. Two methods, Tikhonov's method of regularization and the quasi-reversibility method of Lattès and Lions, are appraised from this point of view.
The former method provides a useful means of incorporating a constraint into numerical algorithms. The analysis suggests that the approach can be generalized to embody constraints other than those employed by Tikhonov. This is effected, and the general "T-method" is the result.
A T-method is used on an extended version of the backwards heat equation with spatially variable coefficients. Numerical computations based upon it are performed.
The statistical method developed by Franklin is shown to have an interpretation as a T-method. This interpretation, although somewhat loose, does explain some empirical convergence properties which are difficult to pin down via a purely statistical argument.
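For orientation, a minimal sketch of the Tikhonov formulation on a synthetic ill-conditioned problem follows; the operator, noise level, and identity constraint matrix are illustrative assumptions, and swapping in a different constraint operator L is precisely the kind of generalization a T-method captures.

```python
import numpy as np

# Hypothetical ill-conditioned forward operator A and noisy data b.
rng = np.random.default_rng(0)
n = 50
x_grid = np.linspace(0.0, 1.0, n)
A = np.vander(x_grid, n, increasing=True)   # severely ill-conditioned
x_true = np.sin(2.0 * np.pi * x_grid)
b = A @ x_true + 1e-6 * rng.standard_normal(n)

# Tikhonov regularization: minimize ||A x - b||^2 + alpha ||L x||^2,
# solved here via the normal equations (A^T A + alpha L^T L) x = A^T b.
# L = I is Tikhonov's simplest constraint; a T-method would swap in another L.
alpha = 1e-8
L = np.eye(n)
x_reg = np.linalg.solve(A.T @ A + alpha * (L.T @ L), A.T @ b)

print("reconstruction error:", np.linalg.norm(x_reg - x_true))
```

The parameter α trades data fidelity against the constraint; loosely, in a statistical reading like Franklin's it plays the role of a prior noise-to-signal ratio.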
Abstract:
This study addresses the problem of obtaining reliable velocities and displacements from accelerograms, a concern which often arises in earthquake engineering. A closed-form acceleration expression with random parameters is developed to test any strong-motion accelerogram processing method. Integration of this analytical time history yields the exact velocities, displacements and Fourier spectra. Noise and truncation can also be added. A two-step testing procedure is proposed and the original Volume II routine is used as an illustration. The main sources of error are identified and discussed. Although these errors may be reduced, it is impossible to extract the true time histories from an analog or digital accelerogram because of the uncertain noise level and missing data. Based on these uncertainties, a probabilistic approach is proposed as a new accelerogram processing method. A most probable record is presented as well as a reliability interval which reflects the level of error-uncertainty introduced by the recording and digitization process. The data is processed in the frequency domain, under assumptions governing either the initial value or the temporal mean of the time histories. This new processing approach is tested on synthetic records. It induces little error and the digitization noise is adequately bounded. Filtering is intended to be kept to a minimum and two optimal error-reduction methods are proposed. The "noise filters" reduce the noise level at each harmonic of the spectrum as a function of the signal-to-noise ratio. However, the correction at low frequencies is not sufficient to significantly reduce the drifts in the integrated time histories. The "spectral substitution method" uses optimization techniques to fit spectral models of near-field, far-field or structural motions to the amplitude spectrum of the measured data. The extremes of the spectrum of the recorded data where noise and error prevail are then partly altered, but not removed, and statistical criteria provide the choice of the appropriate cutoff frequencies. This correction method has been applied to existing strong-motion far-field, near-field and structural data with promising results. Since this correction method maintains the whole frequency range of the record, it should prove to be very useful in studying the long-period dynamics of local geology and structures.
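The "noise filters" described above, attenuating each harmonic according to its signal-to-noise ratio, read like a classical Wiener filter. A schematic sketch under that interpretation follows; the flat noise model, the file name, and the crude signal-spectrum estimate are assumptions for illustration, not the thesis's exact algorithm.

```python
import numpy as np

def noise_filter(accel, dt, noise_psd):
    """Attenuate each harmonic by a Wiener-style gain snr / (1 + snr)."""
    spectrum = np.fft.rfft(accel)
    freqs = np.fft.rfftfreq(len(accel), dt)
    signal_psd = np.abs(spectrum) ** 2                  # crude estimate: raw periodogram
    snr = signal_psd / np.maximum(noise_psd(freqs), 1e-30)
    gain = snr / (1.0 + snr)                            # in [0, 1): near 1 where SNR is high
    return np.fft.irfft(gain * spectrum, n=len(accel))

# Usage with a hypothetical flat digitization-noise power spectrum:
dt = 0.01                                               # sampling interval, s
accel = np.loadtxt("record.txt")                        # hypothetical digitized accelerogram
filtered = noise_filter(accel, dt, lambda f: np.full_like(f, 1e-4))
```

As the abstract notes, such a gain does little at the lowest frequencies, where the SNR estimate is itself unreliable, which is what motivates the complementary spectral substitution method.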
Abstract:
In this work we chiefly deal with two broad classes of problems in computational materials science: determining the doping mechanism in a semiconductor and developing an extreme-condition equation of state. While solving certain aspects of these questions is well-trodden ground, both require extending the reach of existing methods to answer them fully. Here we choose to build upon the framework of density functional theory (DFT), which provides an efficient means of investigating a system from a quantum-mechanical description.
Zinc phosphide (Zn3P2) could be the basis for cheap and highly efficient solar cells. Its use in this regard is limited by the difficulty of n-type doping the material. In an effort to understand the mechanism behind this, the energetics and electronic structure of intrinsic point defects in zinc phosphide are studied using generalized Kohn-Sham theory with the Heyd, Scuseria, and Ernzerhof (HSE) hybrid functional for exchange and correlation. A novel 'perturbation extrapolation' scheme is used to extend the computationally expensive HSE functional to this large-scale defect system. According to the calculations, the formation energies of charged phosphorus interstitial defects are very low in n-type Zn3P2; these defects act as 'electron sinks', nullifying the desired doping and lowering the Fermi level back towards the p-type regime. Going forward, this insight provides clues to fabricating useful zinc-phosphide-based devices. In addition, the methodology developed for this work can be applied to further doping studies in other systems.
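For background, defect energetics of this kind are conventionally obtained from a supercell formation-energy expression; a standard form (stated here for orientation, not quoted from the thesis) is

```latex
E_f[X^q] = E_\mathrm{tot}[X^q] - E_\mathrm{tot}[\mathrm{bulk}]
         - \sum_i n_i \mu_i + q\,(E_F + E_\mathrm{VBM}) + E_\mathrm{corr},
```

where n_i counts atoms exchanged with reservoirs at chemical potentials μ_i, q is the defect charge state, E_F is the Fermi level referenced to the valence-band maximum E_VBM, and E_corr collects finite-size corrections. The linear dependence on E_F is what lets a low-energy charged interstitial pin the Fermi level as described above.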
Accurate determination of high-pressure and high-temperature equations of state is fundamental in a variety of fields. However, it is often very difficult to cover a wide range of temperatures and pressures in a laboratory setting. Here we develop methods to determine a multi-phase equation of state for Ta through computation. The typical means of investigating thermodynamic properties is 'classical' molecular dynamics, where the atomic motion is calculated from Newtonian mechanics with the electronic effects abstracted away into an interatomic potential function. For our purposes, a 'first principles' approach such as DFT is preferable, since a classical potential is typically valid for only the portion of the phase diagram to which it has been fit. Furthermore, at extremes of temperature and pressure, quantum effects become critical to an accurate equation of state and are very hard to reproduce in even complex model potentials. This requires extending the inherently zero-temperature DFT to predict the finite-temperature response of the system. Statistical modelling and thermodynamic integration are used to extend our results over all phases, as well as over the phase-coexistence regions, which lie at the limits of typical DFT validity. We deliver the most comprehensive and accurate equation of state yet produced for Ta. This work also lends insights that can be applied to further equation-of-state work in many other materials.
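The thermodynamic-integration step mentioned here has a standard generic form: the free energy of the DFT system is reached from a tractable reference system by integrating along a coupling parameter λ (assuming the usual linear switching; the thesis's exact construction may differ):

```latex
U_\lambda = (1-\lambda)\,U_\mathrm{ref} + \lambda\,U_\mathrm{DFT},
\qquad
F = F_\mathrm{ref} + \int_0^1 \bigl\langle U_\mathrm{DFT} - U_\mathrm{ref} \bigr\rangle_\lambda \, d\lambda,
```

where the angle brackets denote an ensemble average sampled with the mixed potential U_λ. Phase boundaries then follow from equating the free energies of the competing phases.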
Abstract:
Melting-temperature calculation has important applications in the theoretical study of phase diagrams and in computational materials screening. In this thesis, we present two new methods, an improved Widom particle-insertion method and the small-cell coexistence method, which we developed in order to capture melting temperatures both accurately and quickly.
We propose a scheme that drastically improves the efficiency of Widom's particle insertion method by efficiently sampling cavities while evaluating the integrals that yield the chemical potential of a physical system. This idea enables us to calculate the chemical potentials of liquids directly from first principles, without the reference system required by the commonly used thermodynamic integration method. As an example, we apply our scheme, combined with the density functional formalism, to the calculation of the chemical potential of liquid copper. The calculated chemical potential is further used to locate the melting temperature. The calculated results agree closely with experiments.
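Widom's method rests on the test-particle identity, quoted here in its standard canonical form for background (the improvement described above biases insertions toward cavities and reweights the estimator accordingly):

```latex
\mu_\mathrm{ex} = -k_B T \,\ln \left\langle e^{-\Delta U / k_B T} \right\rangle_{N,V,T},
```

where ΔU is the potential-energy change on inserting a ghost particle at a random position in an N-particle configuration, and the full chemical potential adds the ideal-gas term. In a dense liquid almost all random insertions overlap existing atoms and contribute negligibly to the average, which is why sampling cavities first is such a large efficiency win.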
We propose the small-cell coexistence method based on the statistical analysis of small-size coexistence MD simulations. It eliminates the risk of a metastable superheated solid in the fast-heating method, while also significantly reducing the computational cost relative to the traditional large-scale coexistence method. Using empirical potentials, we validate the method and systematically study the finite-size effect on the calculated melting points. The method converges to the exact result in the limit of large system size. An accuracy within 100 K in melting temperature is usually achieved when the simulation contains more than 100 atoms. DFT examples of tantalum, high-pressure sodium, and the ionic material NaCl are shown to demonstrate the accuracy and flexibility of the method in practical applications. The method serves as a promising approach for large-scale automated materials screening in which the melting temperature is a design criterion.
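One plausible reduction of the small-cell coexistence statistics, sketched below under stated assumptions, treats each short run as a Bernoulli trial that ends fully solid or fully liquid, fits a logistic curve to the melt fraction versus temperature, and reads the melting point off the 50% crossing; the temperatures, fractions, and logistic form are illustrative, not the thesis's published estimator.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical outcomes: at each temperature, the fraction of small-cell
# coexistence MD runs that ended fully liquid.
T = np.array([3100.0, 3150.0, 3200.0, 3250.0, 3300.0])   # K
p_liquid = np.array([0.05, 0.25, 0.55, 0.80, 0.95])

def logistic(T, Tm, w):
    """Melt probability modelled as a smooth step centred on Tm."""
    return 1.0 / (1.0 + np.exp(-(T - Tm) / w))

(Tm, w), _ = curve_fit(logistic, T, p_liquid, p0=(3200.0, 50.0))
print(f"estimated melting point: {Tm:.0f} K (transition width {w:.0f} K)")
```

The transition width w shrinks as the cell grows, which is one way to see the finite-size convergence the abstract describes.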
We present in detail two examples of refractory materials. First, we demonstrate how key material properties that provide guidance in the design of refractory materials can be accurately determined via ab initio thermodynamic calculations in conjunction with experimental techniques based on synchrotron X-ray diffraction and thermal analysis under laser-heated aerodynamic levitation. The properties considered include melting point, heat of fusion, heat capacity, thermal expansion coefficients, thermal stability, and sublattice disordering, as illustrated in a motivating example of lanthanum zirconate (La2Zr2O7). The close agreement with experiment in the known but structurally complex compound La2Zr2O7 provides a good indication that the computational methods described can be used within a computational screening framework to identify novel refractory materials. Second, we report an extensive investigation into the melting temperatures of the Hf-C and Hf-Ta-C systems using ab initio calculations. With melting points above 4000 K, hafnium carbide (HfC) and tantalum carbide (TaC) are among the most refractory binary compounds known to date. Their mixture, with the general formula Ta_xHf_{1-x}C_y, is known to have a melting point of 4215 K at the composition Ta4HfC5, which has long been considered the highest melting temperature of any solid. Very few measurements of the melting point of tantalum and hafnium carbides have been documented, because of the obvious experimental difficulties at extreme temperatures. The investigation lets us identify three major chemical factors that contribute to the high melting temperatures. Based on these three factors, we propose and explore a new class of materials which, according to our ab initio calculations, may possess even higher melting temperatures than Ta-Hf-C. This example also demonstrates the feasibility of materials screening and discovery via ab initio calculations for the optimization of "higher-level" properties whose determination requires extensive sampling of atomic configuration space.
Abstract:
A review is presented of the statistical bootstrap model of Hagedorn and Frautschi. This model is an attempt to apply the methods of statistical mechanics in high-energy physics, while treating all hadron states (stable or unstable) on an equal footing. A statistical calculation of the resonance spectrum on this basis leads to an exponentially rising level density ρ(m) ~ c m^(-3) e^(β₀m) at high masses.
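Written out, the level density and its thermodynamic consequence (a standard feature of the bootstrap picture) are

```latex
\rho(m) \sim c\, m^{-3} e^{\beta_0 m},
\qquad
Z(\beta) \propto \int \rho(m)\, e^{-\beta m}\, dm ,
```

so the partition function diverges for temperatures above T_H = 1/β₀: the exponential spectrum turns 1/β₀ into a limiting 'Hagedorn' temperature, of order 120 MeV for the value of β₀ quoted below.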
In the present work, explicit formulae are given for the asymptotic dependence of the level density on quantum numbers, in various cases. Hamer and Frautschi's model for a realistic hadron spectrum is described.
A statistical model for hadron reactions is then put forward, analogous to the Bohr compound nucleus model in nuclear physics, which makes use of this level density. Some general features of resonance decay are predicted. The model is applied to the process of N̄N annihilation at rest with overall success, and explains the high final-state pion multiplicity, together with the low individual branching ratios into two-body final states, which are characteristic of the process. For more general reactions, the model needs modification to take account of correlation effects. Nevertheless, it is capable of explaining the phenomenon of limited transverse momenta, and the exponential decrease in the production frequency of heavy particles with their mass, as shown by Hagedorn. Frautschi's results on "Ericson fluctuations" in hadron physics are outlined briefly. The value of β₀ required in all these applications is consistently around (120 MeV)^(-1), corresponding to a "resonance volume" whose radius is very close to ƛ_π. The construction of a "multiperipheral cluster model" for high-energy collisions is advocated.