5 results for Quantification limit in CaltechTHESIS
Abstract:
Many engineering applications face the problem of bounding the expected value of a quantity of interest (performance, risk, cost, etc.) that depends on stochastic uncertainties whose probability distribution is not known exactly. Optimal uncertainty quantification (OUQ) is a framework that aims at obtaining the best bound in these situations by explicitly incorporating available information about the distribution. Unfortunately, this often leads to non-convex optimization problems that are numerically expensive to solve.
This thesis focuses on efficient numerical algorithms for OUQ problems. It begins by investigating several classes of OUQ problems that can be reformulated as convex optimization problems. Conditions on the objective function and information constraints under which a convex formulation exists are presented. Since the size of the optimization problem can become quite large, solutions for scaling up are also discussed. Finally, the capability of analyzing a practical system through such convex formulations is demonstrated by a numerical example of energy storage placement in power grids.
When an equivalent convex formulation is unavailable, it is possible to find a convex problem that provides a meaningful bound for the original problem, also known as a convex relaxation. As an example, the thesis investigates the setting used in Hoeffding's inequality. The naive formulation requires solving a collection of non-convex polynomial optimization problems whose number grows doubly exponentially. After structures such as symmetry are exploited, it is shown that both the number and the size of the polynomial optimization problems can be reduced significantly. Each polynomial optimization problem is then bounded by its convex relaxation using sums-of-squares. These bounds are found to be tight in all the numerical examples tested in the thesis and are significantly better than Hoeffding's bounds.
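For reference, the classical Hoeffding bound that the thesis's sums-of-squares relaxations improve upon can be sketched numerically. The code below is a generic illustration of the baseline inequality for [0, 1]-valued variables, not the thesis's tighter bounds; the Monte Carlo check simply confirms the bound holds (and is typically loose).

```python
import math
import random

def hoeffding_bound(n, eps):
    # Hoeffding: P(sample mean - true mean >= eps) <= exp(-2 n eps^2)
    # for n independent random variables bounded in [0, 1]
    return math.exp(-2 * n * eps ** 2)

def empirical_tail(n, eps, trials=20000, seed=0):
    # Monte Carlo estimate of the same tail probability for Uniform[0, 1]
    rng = random.Random(seed)
    hits = sum(
        1 for _ in range(trials)
        if sum(rng.random() for _ in range(n)) / n - 0.5 >= eps
    )
    return hits / trials

# The empirical tail sits well below the (loose) Hoeffding bound.
print(hoeffding_bound(50, 0.1), empirical_tail(50, 0.1))
```

The gap between the two numbers is exactly the slack that sharper, distribution-aware bounds aim to close.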
Abstract:
Experimental studies were conducted with the goals of 1) determining the origin of Pt- group element (PGE) alloys and associated mineral assemblages in refractory inclusions from meteorites and 2) developing a new ultrasensitive method for the in situ chemical and isotopic analysis of PGE. A general review of the geochemistry and cosmochemistry of the PGE is given, and specific research contributions are presented within the context of this broad framework.
An important step toward understanding the cosmochemistry of the PGE is the determination of the origin of PGE-rich metallic phases (most commonly εRu-Fe) that are found in Ca, Al-rich refractory inclusions (CAI) in C3V meteorites. These metals occur along with γNi-Fe metals, Ni-Fe sulfides and Fe oxides in multiphase opaque assemblages. Laboratory experiments were used to show that the mineral assemblages and textures observed in opaque assemblages could be produced by sulfidation and oxidation of once homogeneous Ni-Fe-PGE metals. Phase equilibria, partitioning and diffusion kinetics were studied in the Ni-Fe-Ru system in order to quantify the conditions of opaque assemblage formation. Phase boundaries and tie lines in the Ni-Fe-Ru system were determined at 1273, 1073 and 873K using an experimental technique that allowed the investigation of a large portion of the Ni-Fe-Ru system with a single experiment at each temperature by establishing a concentration gradient within which local equilibrium between coexisting phases was maintained. A wide miscibility gap was found to be present at each temperature, separating a hexagonal close-packed εRu-Fe phase from a face-centered cubic γNi-Fe phase. Phase equilibria determined here for the Ni-Fe-Ru system, and phase equilibria from the literature for the Ni-Fe-S and Ni-Fe-O systems, were compared with analyses of minerals from opaque assemblages to estimate the temperature and chemical conditions of opaque assemblage formation. It was determined that opaque assemblages equilibrated at a temperature of ~770K, a sulfur fugacity 10 times higher than that of an equilibrium solar gas, and an oxygen fugacity 10⁶ times higher than that of an equilibrium solar gas.
Diffusion rates between γNi-Fe and εRu-Fe metal play a critical role in determining the time (with respect to CAI petrogenesis) and duration of the opaque assemblage equilibration process. The diffusion coefficient for Ru in Ni (D_Ru^Ni) was determined as an analog for the Ni-Fe-Ru system by the thin-film diffusion method in the temperature range of 1073 to 1673K and is given by the expression:
D_Ru^Ni (cm² sec⁻¹) = 5.0(±0.7) × 10⁻³ exp(−2.3(±0.1) × 10¹² erg mole⁻¹/RT), where R is the gas constant and T is the temperature in K. Based on the rates of dissolution and exsolution of metallic phases in the Ni-Fe-Ru system it is suggested that opaque assemblages equilibrated after the melting and crystallization of host CAI during a metamorphic event of ≥ 10³ years duration. It is inferred that opaque assemblages originated as immiscible metallic liquid droplets in the CAI silicate liquid. The bulk compositions of PGE in these precursor alloys reflect an early stage of condensation from the solar nebula, and the partitioning of V between the precursor alloys and CAI silicate liquid reflects the reducing nebular conditions under which CAI were melted. The individual mineral phases now observed in opaque assemblages do not preserve an independent history prior to CAI melting and crystallization, but instead provide important information on the post-accretionary history of C3V meteorites and allow the quantification of the temperature, sulfur fugacity and oxygen fugacity of cooling planetary environments. This contrasts with previous models that called upon the formation of opaque assemblages by aggregation of phases that formed independently under highly variable conditions in the solar nebula prior to the crystallization of CAI.
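The Arrhenius expression above can be evaluated directly. The sketch below plugs in the fitted parameters (with R in erg mol⁻¹ K⁻¹) and, under the standard order-of-magnitude scaling that the diffusion length grows as √(Dt), estimates a length scale at the inferred ~770 K equilibration temperature; the √(Dt) step is an illustrative assumption, not a calculation taken from the thesis.

```python
import math

R = 8.314e7  # gas constant, erg mol^-1 K^-1

def D_Ru_in_Ni(T):
    # Fitted Arrhenius law from the abstract:
    # D (cm^2 s^-1) = 5.0e-3 * exp(-2.3e12 erg mol^-1 / (R T))
    return 5.0e-3 * math.exp(-2.3e12 / (R * T))

def diffusion_length_cm(T, years):
    # Characteristic length ~ sqrt(D t); order-of-magnitude scaling only
    t_seconds = years * 3.156e7  # seconds per year
    return math.sqrt(D_Ru_in_Ni(T) * t_seconds)

# Evaluate at the inferred ~770 K equilibration temperature over 10^3 years
print(D_Ru_in_Ni(770.0), diffusion_length_cm(770.0, 1e3))
```

At ~770 K the diffusivity is of order 10⁻¹⁸ cm² s⁻¹, so a ≥ 10³ year event moves material only over micron scales, consistent with equilibration textures rather than wholesale homogenization.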
Analytical studies were carried out on PGE-rich phases from meteorites and the products of synthetic experiments using traditional electron microprobe x-ray analytical techniques. The concentrations of PGE in common minerals from meteorites and terrestrial rocks are far below the ~100 ppm detection limit of the electron microprobe. This has limited the scope of analytical studies to the very few cases where PGE are unusually enriched. To study the distribution of PGE in common minerals will require an in situ analytical technique with much lower detection limits than any methods currently in use. To overcome this limitation, resonance ionization of sputtered atoms was investigated for use as an ultrasensitive in situ analytical technique for the analysis of PGE. The mass spectrometric analysis of Os and Re was investigated using a pulsed primary Ar+ ion beam to provide sputtered atoms for resonance ionization mass spectrometry. An ionization scheme for Os that utilizes three resonant energy levels (including an autoionizing energy level) was investigated and found to have superior sensitivity and selectivity compared to nonresonant and one- and two-energy-level resonant ionization schemes. An elemental selectivity for Os over Re of ≥ 10³ was demonstrated. It was found that detuning the ionizing laser from the autoionizing energy level to an arbitrary region in the ionization continuum resulted in a five-fold decrease in signal intensity and a ten-fold decrease in elemental selectivity. Osmium concentrations in synthetic metals and iron meteorites were measured to demonstrate the analytical capabilities of the technique. A linear correlation between Os+ signal intensity and the known Os concentration was observed over a range of nearly 10⁴ in Os concentration with an accuracy of ~ ±10%, a minimum detection limit of 7 parts per billion atomic, and a useful yield of 1%.
Resonance ionization of sputtered atoms samples the dominant neutral fraction of sputtered atoms and utilizes multiphoton resonance ionization to achieve high sensitivity and to eliminate atomic and molecular interferences. Matrix effects should be small compared to secondary ion mass spectrometry because ionization occurs in the gas phase and is largely independent of the physical properties of the matrix material. Resonance ionization of sputtered atoms can be applied to in situ chemical analysis of most high ionization potential elements (including all of the PGE) in a wide range of natural and synthetic materials. The high useful yield and elemental selectivity of this method should eventually allow the in situ measurement of Os isotope ratios in some natural samples and in sample extracts enriched in PGE by fire assay fusion.
Phase equilibria and diffusion experiments have provided the basis for a reinterpretation of the origin of opaque assemblages in CAI and have yielded quantitative information on conditions in the primitive solar nebula and cooling planetary environments. Development of the method of resonance ionization of sputtered atoms for the analysis of Os has shown that this technique has wide applications in geochemistry and will for the first time allow in situ studies of the distribution of PGE at the low concentration levels at which they occur in common minerals.
Abstract:
Few credible source models are available from past large-magnitude earthquakes. A stochastic source model generation algorithm thus becomes necessary for robust risk quantification using scenario earthquakes. We present an algorithm that combines the physics of fault ruptures as imaged in laboratory earthquakes with stress estimates on the fault constrained by field observations to generate stochastic source models for large-magnitude (Mw 6.0-8.0) strike-slip earthquakes. The algorithm is validated through a statistical comparison of synthetic ground motion histories from a stochastically generated source model for a magnitude 7.9 earthquake and a kinematic finite-source inversion of a past earthquake of equivalent magnitude on a geometrically similar fault. The synthetic dataset comprises three-component ground motion waveforms, computed at 636 sites in southern California, for ten hypothetical rupture scenarios (five hypocenters, each with two rupture directions) on the southern San Andreas fault. A similar validation exercise is conducted for a magnitude 6.0 earthquake, the lower magnitude limit for the algorithm. Additionally, ground motions from the Mw 7.9 earthquake simulations are compared against predictions by the Campbell-Bozorgnia NGA relation as well as the ShakeOut scenario earthquake. The algorithm is then applied to generate fifty source models for a hypothetical magnitude 7.9 earthquake originating at Parkfield, with rupture propagating from north to south (towards Wrightwood), similar to the 1857 Fort Tejon earthquake. Using the spectral element method, three-component ground motion waveforms are computed in the Los Angeles basin for each scenario earthquake and the sensitivity of ground shaking intensity to seismic source parameters (such as the percentage of asperity area relative to the fault area, rupture speed, and risetime) is studied.
Under plausible San Andreas fault earthquakes in the next 30 years, modeled using the stochastic source algorithm, the performance of two 18-story steel moment frame buildings (UBC 1982 and 1997 designs) in southern California is quantified. The approach integrates rupture-to-rafters simulations into the PEER performance-based earthquake engineering (PBEE) framework. Using stochastic sources and computational seismic wave propagation, three-component ground motion histories at 636 sites in southern California are generated for sixty scenario earthquakes on the San Andreas fault. The ruptures, with moment magnitudes in the range of 6.0-8.0, are assumed to occur at five locations on the southern section of the fault. Two unilateral rupture propagation directions are considered. The 30-year probabilities of all plausible ruptures in this magnitude range and in that section of the fault, as forecast by the United States Geological Survey, are distributed among these 60 earthquakes based on proximity and moment release. The response of the two 18-story buildings hypothetically located at each of the 636 sites under 3-component shaking from all 60 events is computed using 3-D nonlinear time-history analysis. Using these results, the probability of the structural response exceeding Immediate Occupancy (IO), Life-Safety (LS), and Collapse Prevention (CP) performance levels under San Andreas fault earthquakes over the next thirty years is evaluated.
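The abstract only summarizes how per-scenario 30-year probabilities combine with simulated structural responses. The sketch below shows one plausible combination rule, treating scenario occurrences as independent; both the rule and all numbers are illustrative assumptions, not the thesis's actual hazard aggregation.

```python
def prob_level_exceeded(event_probs, exceeds_level):
    # P(at least one scenario occurs AND drives the structure past the
    # performance level), assuming scenario occurrences are independent
    # (a simplifying modeling assumption made here for illustration)
    p_none = 1.0
    for p, exceeded in zip(event_probs, exceeds_level):
        if exceeded:
            p_none *= (1.0 - p)
    return 1.0 - p_none

# Hypothetical 30-year occurrence probabilities for four scenarios, and
# whether each pushes the frame past, e.g., the Life-Safety level at a site
probs = [0.02, 0.05, 0.01, 0.10]
exceeds = [False, True, False, True]
print(prob_level_exceeded(probs, exceeds))  # 1 - 0.95 * 0.90 = 0.145
```

In the thesis the flags would come from the 3-D nonlinear time-history analyses at each site, one per scenario.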
Furthermore, the conditional and marginal probability distributions of peak ground velocity (PGV) and displacement (PGD) in Los Angeles and surrounding basins, due to earthquakes occurring primarily on the mid-section of the southern San Andreas fault, are determined using Bayesian model class identification. Simulated ground motions at sites within 55-75 km of the source, from a suite of 60 earthquakes (Mw 6.0-8.0) rupturing primarily the mid-section of the San Andreas fault, provide the PGV and PGD data.
Abstract:
Let P_{K,L}(N) be the number of unordered partitions of a positive integer N into K or fewer positive integer parts, each part not exceeding L. A distribution of the form
Σ_{N≤x} P_{K,L}(N)
is considered first. For any fixed K, this distribution approaches a piecewise polynomial function as L increases to infinity. As both K and L approach infinity, this distribution is asymptotically normal. These results are proved by studying the convergence of the characteristic function.
The main result is the asymptotic behavior of P_{K,K}(N) itself, for certain large K and N. This is obtained by studying a contour integral of the generating function taken along the unit circle. The bulk of the estimate comes from integrating along a small arc near the point 1. Diophantine approximation is used to show that the integral along the rest of the circle is much smaller.
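These counts are concrete enough to tabulate: P_{K,L}(N) is exactly the coefficient of q^N in the Gaussian binomial coefficient [K+L choose K]_q, which gives a quick way to generate the whole distribution and check two of its classical properties (the counts sum to C(K+L, K) and are symmetric about KL/2). A minimal sketch, using the standard product formula rather than anything from the thesis:

```python
from math import comb

def partition_counts(K, L):
    # c[N] = P_{K,L}(N): partitions of N into at most K parts, each <= L.
    # Computed as the coefficients of the Gaussian binomial coefficient
    # [K+L, K]_q = prod_{i=1..K} (1 - q^(L+i)) / (1 - q^i).
    deg = K * L
    c = [0] * (deg + 1)
    c[0] = 1
    for i in range(1, K + 1):
        # multiply by (1 - q^(L+i)): update coefficients high to low
        for n in range(deg, L + i - 1, -1):
            c[n] -= c[n - (L + i)]
        # divide by (1 - q^i) as a power series: update low to high
        for n in range(i, deg + 1):
            c[n] += c[n - i]
    return c

c = partition_counts(4, 5)
# Counts sum to C(K+L, K) and are symmetric: P_{K,L}(N) = P_{K,L}(KL - N)
print(sum(c) == comb(9, 4), c == c[::-1])
```

The cumulative sums of this array are exactly the distribution Σ_{N≤x} P_{K,L}(N) studied in the first part of the abstract.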
Abstract:
Computational imaging is flourishing thanks to recent advances in array photodetectors and image processing algorithms. This thesis presents Fourier ptychography, a computational imaging technique implemented in microscopy to break the limits of conventional optics. With the implementation of Fourier ptychography, the resolution of the imaging system can surpass the diffraction limit of the objective lens's numerical aperture; the quantitative phase information of a sample can be reconstructed from intensity-only measurements; and the aberration of a microscope system can be characterized and computationally corrected. This computational microscopy technique enhances the performance of conventional optical systems and expands the scope of their applications.
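The resolution claim can be made concrete with the synthetic-aperture relation standard in the Fourier ptychography literature, where the effective NA is approximately the sum of the objective and illumination NAs. The numbers below are illustrative placeholders, not the thesis's actual optical system.

```python
def abbe_half_pitch_nm(wavelength_nm, na):
    # Coherent Abbe resolution limit: lambda / (2 NA)
    return wavelength_nm / (2.0 * na)

wavelength = 532.0      # nm, illustrative green illumination
na_objective = 0.1      # low-NA objective, illustrative
na_illumination = 0.4   # set by the LED-array illumination angles, illustrative

# Fourier ptychography synthesizes an enlarged aperture in the Fourier domain
na_synthetic = na_objective + na_illumination

print(abbe_half_pitch_nm(wavelength, na_objective))   # objective alone
print(abbe_half_pitch_nm(wavelength, na_synthetic))   # after reconstruction
```

With these illustrative values the reconstructed resolution improves five-fold over what the objective alone could deliver, which is the sense in which the technique surpasses the objective's diffraction limit.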