23 results for Expected First Passage Time

in CaltechTHESIS


Relevance:

100.00%

Publisher:

Abstract:

Some aspects of wave propagation in thin elastic shells are considered. The governing equations are derived by a method which makes their relationship to the exact equations of linear elasticity quite clear. Finite wave propagation speeds are ensured by the inclusion of the appropriate physical effects.

The problem of a constant pressure front moving with constant velocity along a semi-infinite circular cylindrical shell is studied. The behavior of the solution immediately under the leading wave is found, as well as the short time solution behind the characteristic wavefronts. The main long time disturbance is found to travel with the velocity of very long longitudinal waves in a bar and an expression for this part of the solution is given.

When a constant moment is applied to the lip of an open spherical shell, there is an interesting effect due to the focusing of the waves. This phenomenon is studied and an expression is derived for the wavefront behavior for the first passage of the leading wave and its first reflection.

For the two problems mentioned, the method used involves reducing the governing partial differential equations to ordinary differential equations by means of a Laplace transform in time. The information sought is then extracted by doing the appropriate asymptotic expansion with the Laplace variable as parameter.
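In schematic, generic form (not the specific shell equations of the thesis): applying $\bar{w}(x,s) = \int_0^\infty w(x,t)\,e^{-st}\,dt$ converts the partial differential equations in $(x,t)$ into ordinary differential equations in $x$ with $s$ as a parameter; wavefront behaviour then typically follows from expanding $\bar{w}(x,s)$ for large $s$ and inverting the expansion term by term, while the long-time response is associated with the small-$s$ behaviour of the transform.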

Relevance:

40.00%

Publisher:

Abstract:

In the quest for a descriptive theory of decision-making, the rational actor model in economics imposes rather unrealistic expectations and abilities on human decision makers. The further we move from idealized scenarios, such as perfectly competitive markets, and ambitiously extend the reach of the theory to describe everyday decision-making situations, the less sense these assumptions make. Behavioural economics has instead proposed models based on assumptions that are more psychologically realistic, with the aim of gaining more precision and descriptive power. Increased psychological realism, however, comes at the cost of a greater number of parameters and model complexity. There is now a plethora of models, based on different assumptions and applicable in differing contextual settings, and selecting the right model to use tends to be an ad-hoc process. In this thesis, we develop optimal experimental design methods and evaluate different behavioural theories against evidence from lab and field experiments.

We look at evidence from controlled laboratory experiments. Subjects are presented with choices between monetary gambles or lotteries. Different decision-making theories evaluate the choices differently and would make distinct predictions about the subjects' choices. Theories whose predictions are inconsistent with the actual choices can be systematically eliminated. Behavioural theories can have multiple parameters, requiring complex experimental designs with a very large number of possible choice tests. This imposes computational and economic constraints on using classical experimental design methods. We develop a methodology of adaptive tests: Bayesian Rapid Optimal Adaptive Designs (BROAD), which sequentially chooses the "most informative" test at each stage and, based on the response, updates its posterior beliefs over the theories, which in turn informs the next most informative test to run. BROAD utilizes the Equivalence Class Edge Cutting (EC2) criterion to select tests. We prove that the EC2 criterion is adaptively submodular, which allows us to prove theoretical guarantees against the Bayes-optimal testing sequence even in the presence of noisy responses. In simulated ground-truth experiments, we find that the EC2 criterion recovers the true hypotheses with significantly fewer tests than more widely used criteria such as Information Gain and Generalized Binary Search. We show, theoretically as well as experimentally, that, surprisingly, these popular criteria can perform poorly in the presence of noise or subject errors. Furthermore, we use the adaptive submodularity of EC2 to implement an accelerated greedy version of BROAD, which leads to orders-of-magnitude speedups over other methods.
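As a rough illustration of the EC2-style greedy selection, the following minimal sketch makes simplifying assumptions: each theory is its own equivalence class, `likelihood[h, t, r]` is a hypothetical array giving the probability that theory h produces response r on test t, and the score below is a simplified EC2-flavoured objective rather than the BROAD implementation itself.

```python
import numpy as np

def edge_weight(w):
    # Total weight of "edges" between distinct theories, with each edge weighted
    # by the product of the two theories' probability masses.
    return 0.5 * (w.sum() ** 2 - (w ** 2).sum())

def ec2_score(posterior, likelihood, t):
    # Expected reduction in edge weight after observing the response to test t:
    # responses that separate competing theories cut more edge weight.
    p_resp = posterior @ likelihood[:, t, :]          # marginal probability of each response
    expected_after = 0.0
    for r in range(likelihood.shape[2]):
        if p_resp[r] > 0:
            post_r = posterior * likelihood[:, t, r] / p_resp[r]
            expected_after += p_resp[r] * edge_weight(post_r)
    return edge_weight(posterior) - expected_after

def choose_next_test(posterior, likelihood, remaining_tests):
    # Greedy step: run the remaining test with the largest expected edge-weight reduction.
    return max(remaining_tests, key=lambda t: ec2_score(posterior, likelihood, t))

def bayes_update(posterior, likelihood, t, response):
    # Posterior over theories after observing `response` to test t.
    post = posterior * likelihood[:, t, response]
    return post / post.sum()
```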

We use BROAD to perform two experiments. First, we compare the main classes of theories for decision-making under risk, namely: expected value, prospect theory, constant relative risk aversion (CRRA) and moments models. Subjects are given an initial endowment, and sequentially presented choices between two lotteries, with the possibility of losses. The lotteries are selected using BROAD, and 57 subjects from Caltech and UCLA are incentivized by randomly realizing one of the lotteries chosen. Aggregate posterior probabilities over the theories show limited evidence in favour of CRRA and moments models. Classifying the subjects into types showed that most subjects are described by prospect theory, followed by expected value. Adaptive experimental design raises the possibility that subjects could engage in strategic manipulation, i.e. subjects could mask their true preferences and choose differently in order to obtain more favourable tests in later rounds, thereby increasing their payoffs. We pay close attention to this problem; strategic manipulation is ruled out since it is infeasible in practice, and also since we do not find any signatures of it in our data.

In the second experiment, we compare the main theories of time preference: exponential discounting, hyperbolic discounting, "present bias" models: quasi-hyperbolic (α, β) discounting and fixed cost discounting, and generalized-hyperbolic discounting. 40 subjects from UCLA were given choices between 2 options: a smaller but more immediate payoff versus a larger but later payoff. We found very limited evidence for present bias models and hyperbolic discounting, and most subjects were classified as generalized hyperbolic discounting types, followed by exponential discounting.
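For reference, the discount functions being compared take the following standard textbook forms (the parameter names and values below are generic illustrations, not the thesis's notation or estimates):

```python
import numpy as np

# Standard discount functions; t is the delay, all parameters are illustrative.
def exponential(t, delta):             return delta ** t
def hyperbolic(t, k):                  return 1.0 / (1.0 + k * t)
def quasi_hyperbolic(t, beta, delta):  return np.where(t == 0, 1.0, beta * delta ** t)   # "present bias"
def generalized_hyperbolic(t, a, b):   return (1.0 + a * t) ** (-b / a)

# Example: a present-biased agent discounts any delayed payoff by an immediate factor beta.
t = np.array([0, 30, 180])             # hypothetical delays in days
print(quasi_hyperbolic(t, beta=0.7, delta=0.999))
```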

In these models the passage of time is linear. We instead consider a psychological model where the perception of time is subjective. We prove that when the biological (subjective) time is positively dependent, it gives rise to hyperbolic discounting and temporal choice inconsistency.

We also test the predictions of behavioural theories in the "wild". We pay attention to prospect theory, which emerged as the dominant theory in our lab experiments of risky choice. Loss aversion and reference dependence predict that consumers will behave in a way distinct from what the standard rational model predicts. Specifically, loss aversion predicts that when an item is being offered at a discount, the demand for it will be greater than that explained by its price elasticity. Even more importantly, when the item is no longer discounted, demand for its close substitute would increase excessively. We tested this prediction using a discrete choice model with a loss-averse utility function on data from a large eCommerce retailer. Not only did we identify loss aversion, but we also found that the effect decreased with consumers' experience. We outline the policy implications that consumer loss aversion entails, and strategies for competitive pricing.
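A minimal sketch of the kind of reference-dependent utility that could sit inside such a discrete choice (logit) model is shown below; the functional form, reference prices, and coefficient values are illustrative assumptions, not the estimated model from the thesis.

```python
import numpy as np

def loss_averse_utility(price, ref_price, beta_price=-1.0, lam=2.25):
    # Gains (price below the reference) and losses (price above it) enter
    # asymmetrically; lam > 1 encodes loss aversion.
    gain = np.maximum(ref_price - price, 0.0)
    loss = np.maximum(price - ref_price, 0.0)
    return beta_price * price + gain - lam * loss

def logit_choice_probabilities(prices, ref_prices):
    # Multinomial logit over the offered items given their reference-dependent utilities.
    u = loss_averse_utility(np.asarray(prices), np.asarray(ref_prices))
    e = np.exp(u - u.max())
    return e / e.sum()

# Example: item 1 just came off a discount (its reference price is still low), so
# choice probability shifts toward its same-priced substitute, item 2.
print(logit_choice_probabilities(prices=[10.0, 10.0], ref_prices=[8.0, 10.0]))
```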

In future work, BROAD could be applied widely to test different behavioural models, e.g. in social preference and game theory, and in different contextual settings. Additional measurements beyond choice data, including biological measurements such as skin conductance, could be used to more rapidly eliminate hypotheses and speed up model comparison. Discrete choice models also provide a framework for testing behavioural models with field data, and encourage combined lab-field experiments.

Relevance:

30.00%

Publisher:

Abstract:

There are currently two competing models of our universe. One is Big Bang cosmology with inflation. The other is the cyclic model with an ekpyrotic phase in each cycle. This paper is divided into two main parts according to these two models. In the first part, we quantify the potentially observable effects of a small violation of translational invariance during inflation, as characterized by the presence of a preferred point, line, or plane. We explore the imprint such a violation would leave on the cosmic microwave background anisotropy, and provide explicit formulas for the expected amplitudes $\langle a_{lm}a_{l'm'}^*\rangle$ of the spherical-harmonic coefficients. We then provide a model and study the two-point correlation of a massless scalar (the inflaton) when the stress tensor contains the energy density from an infinitely long straight cosmic string in addition to a cosmological constant. Finally, we discuss whether inflation can be reconciled with Liouville's theorem as far as the fine-tuning problem is concerned. In the second part, we find several problems in the cyclic/ekpyrotic cosmology. First of all, the quantum-to-classical transition would not happen during an ekpyrotic phase even for superhorizon modes, and therefore the fluctuations cannot be interpreted as classical. This implies that the prediction of a scale-free power spectrum in the ekpyrotic/cyclic universe model requires more inspection. Secondly, we find that the usual mechanism to solve fine-tuning problems is not compatible with an eternal universe which contains infinitely many cycles in both directions of time. Therefore, all fine-tuning problems, including the flatness problem, still ask for an explanation in any generic cyclic model.

Relevance:

30.00%

Publisher:

Abstract:

Galaxies evolve throughout the history of the universe from the first star-forming sources, through gas-rich asymmetric structures with rapid star formation rates, to the massive symmetrical stellar systems observed at the present day. Determining the physical processes which drive galaxy formation and evolution is one of the most important questions in observational astrophysics. This thesis presents four projects aimed at improving our understanding of galaxy evolution from detailed measurements of star forming galaxies at high redshift.

We use resolved spectroscopy of gravitationally lensed z ≃ 2 - 3 star forming galaxies to measure their kinematic and star formation properties. The combination of lensing with adaptive optics yields physical resolution of ≃ 100 pc, sufficient to resolve giant Hii regions. We find that ~ 70 % of galaxies in our sample display ordered rotation with high local velocity dispersion indicating turbulent thick disks. The rotating galaxies are gravitationally unstable and are expected to fragment into giant clumps. The size and dynamical mass of giant Hii regions are in agreement with predictions for such clumps indicating that gravitational instability drives the rapid star formation. The remainder of our sample is comprised of ongoing major mergers. Merging galaxies display similar star formation rate, morphology, and local velocity dispersion as isolated sources, but their velocity fields are more chaotic with no coherent rotation.

We measure resolved metallicity in four lensed galaxies at z = 2.0 − 2.4 from optical emission line diagnostics. Three rotating galaxies display radial gradients with higher metallicity at smaller radii, while the fourth is undergoing a merger and has an inverted gradient with lower metallicity at the center. Strong gradients in the rotating galaxies indicate that they are growing inside-out with star formation fueled by accretion of metal-poor gas at large radii. By comparing measured gradients with an appropriate comparison sample at z = 0, we demonstrate that metallicity gradients in isolated galaxies must flatten at later times. The amount of size growth inferred by the gradients is in rough agreement with direct measurements of massive galaxies. We develop a chemical evolution model to interpret these data and conclude that metallicity gradients are established by a gradient in the outflow mass loading factor, combined with radial inflow of metal-enriched gas.

We present the first rest-frame optical spectroscopic survey of a large sample of low-luminosity galaxies at high redshift (L < L*, 1.5 < z < 3.5). This population dominates the star formation density of the universe at high redshifts, yet such galaxies are normally too faint to be studied spectroscopically. We take advantage of strong gravitational lensing magnification to compile observations for a sample of 29 galaxies using modest integration times with the Keck and Palomar telescopes. Balmer emission lines confirm that the sample has a median SFR ∼ 10 M_sun yr^−1 and extends to lower SFR than has been probed by other surveys at similar redshift. We derive the metallicity, dust extinction, SFR, ionization parameter, and dynamical mass from the spectroscopic data, providing the first accurate characterization of the star-forming environment in low-luminosity galaxies at high redshift. For the first time, we directly test the proposal that the relation between galaxy stellar mass, star formation rate, and gas phase metallicity does not evolve. We find lower gas phase metallicity in the high redshift galaxies than in local sources with equivalent stellar mass and star formation rate, arguing against a time-invariant relation. While our result is preliminary and may be biased by measurement errors, this represents an important first measurement that will be further constrained by ongoing analysis of the full data set and by future observations.

We present a study of composite rest-frame ultraviolet spectra of Lyman break galaxies at z = 4 and discuss implications for the distribution of neutral outflowing gas in the circumgalactic medium. In general we find similar spectroscopic trends to those found at z = 3 by earlier surveys. In particular, absorption lines which trace neutral gas are weaker in less evolved galaxies with lower stellar masses, smaller radii, lower luminosity, less dust, and stronger Lyα emission. Typical galaxies are thus expected to have stronger Lyα emission and weaker low-ionization absorption at earlier times, and we indeed find somewhat weaker low-ionization absorption at higher redshifts. In conjunction with earlier results, we argue that the reduced low-ionization absorption is likely caused by lower covering fraction and/or velocity range of outflowing neutral gas at earlier epochs. This result has important implications for the hypothesis that early galaxies were responsible for cosmic reionization. We additionally show that fine structure emission lines are sensitive to the spatial extent of neutral gas, and demonstrate that neutral gas is concentrated at smaller galactocentric radii in higher redshift galaxies.

The results of this thesis present a coherent picture of galaxy evolution at high redshifts 2 ≲ z ≲ 4. Roughly 1/3 of massive star forming galaxies at this period are undergoing major mergers, while the rest are growing inside-out with star formation occurring in gravitationally unstable thick disks. Star formation, stellar mass, and metallicity are limited by outflows which create a circumgalactic medium of metal-enriched material. We conclude by describing some remaining open questions and prospects for improving our understanding of galaxy evolution with future observations of gravitationally lensed galaxies.

Relevance:

30.00%

Publisher:

Abstract:

Part I.

We have developed a technique for measuring the depth-time history of rigid body penetration into brittle materials (hard rocks and concretes) under a deceleration of ~10^5 g. The technique includes bar-coded projectile, sabot-projectile separation, detection, and recording systems. Because the technique can give very dense data on the penetration depth-time history, the penetration velocity can be deduced. Error analysis shows that the technique has a small intrinsic error of ~3-4% in time during penetration, and 0.3 to 0.7 mm in penetration depth. A series of 4140 steel projectile penetration experiments into G-mixture mortar targets has been conducted using the Caltech 40 mm gas/powder gun in the velocity range of 100 to 500 m/s.

We report, for the first time, the whole depth-time history of rigid body penetration into brittle materials (the G-mixture mortar) under 10^5 g deceleration. Based on the experimental results, including penetration depth-time history, damage of recovered target and projectile materials, and theoretical analysis, we find:

1. Target materials are damaged via compacting in the region in front of a projectile and via brittle radial and lateral crack propagation in the region surrounding the penetration path. The results suggest that expected cracks in front of penetrators may be stopped by a comminuted region that is induced by wave propagation. Aggregate erosion on the projectile lateral surface is < 20% of the final penetration depth. This result suggests that the effect of lateral friction on the penetration process can be ignored.

2. Final penetration depth, Pmax, scales linearly with the initial projectile energy per unit cross-section area, es, when targets are intact after impact. Based on the experimental data on the mortar targets, the relation is Pmax(mm) = 1.15 es (J/mm^2) + 16.39. (A short numerical illustration of this and the other fitted relations below is given after this list.)

3. Estimation of the energy needed to create a unit penetration volume suggests that the average pressure acting on the target material during penetration is ~10 to 20 times higher than the unconfined strength of target materials under quasi-static loading, and 3 to 4 times higher than the highest possible pressure due to friction and material strength and its rate dependence. In addition, the experimental data show that the interaction between cracks and the target free surface significantly affects the penetration process.

4. Based on the fact that the penetration duration, tmax, increases slowly with es and is approximately independent of projectile radius, the dependence of tmax on projectile length is suggested to be described by tmax(μs) = 2.08 es (J/mm^2) + 349.0 m/(πR^2), in which m is the projectile mass in grams and R is the projectile radius in mm. The prediction from this relation is in reasonable agreement with the experimental data for different projectile lengths.

5. Deduced penetration velocity time histories suggest that whole penetration history is divided into three stages: (1) An initial stage in which the projectile velocity change is small due to very small contact area between the projectile and target materials; (2) A steady penetration stage in which projectile velocity continues to decrease smoothly; (3) A penetration stop stage in which projectile deceleration jumps up when velocities are close to a critical value of ~ 35 m/s.

6. The deduced average deceleration, a, in the steady penetration stage for projectiles with the same dimensions is found to be a(g) = 192.4 v + 1.89 x 10^4, where v is the initial projectile velocity in m/s. The average pressure acting on target materials during penetration is estimated to be very comparable to the shock wave pressure.

7. A similarity of the penetration process is found and described by a relation between normalized penetration depth, P/Pmax, and normalized penetration time, t/tmax, as P/Pmax = f(t/tmax), where f is a function of t/tmax. After f(t/tmax) is determined using experimental data for projectiles with 150 mm length, the penetration depth-time history for projectiles with 100 mm length predicted by this relation is in good agreement with experimental data. This similarity also predicts that average deceleration increases with decreasing projectile length, which is verified by the experimental data.

8. Based on the penetration process analysis and the present data, a first principle model for rigid body penetration is suggested. The model incorporates the models for contact area between projectile and target materials, friction coefficient, penetration stop criterion, and normal stress on the projectile surface. The most important assumptions used in the model are: (1) The penetration process can be treated as a series of impact events, therefore, pressure normal to projectile surface is estimated using the Hugoniot relation of target material; (2) The necessary condition for penetration is that the pressure acting on target materials is not lower than the Hugoniot elastic limit; (3) The friction force on projectile lateral surface can be ignored due to cavitation during penetration. All the parameters involved in the model are determined based on independent experimental data. The penetration depth time histories predicted from the model are in good agreement with the experimental data.

9. Based on planar impact and previous quasi-static experimental data, the strain-rate dependence of the mortar compressive strength is described by σf/σ0f = exp(0.0905 (log(ε̇/ε̇_0))^1.14) in the strain-rate range of 10^-7/s to 10^3/s (σ0f and ε̇_0 are the reference compressive strength and strain rate, respectively). The non-dispersive Hugoniot elastic wave in the G-mixture has an amplitude of ~0.14 GPa and a velocity of ~4.3 km/s.
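A short numerical illustration of the fitted relations quoted in items 2, 4, and 6 above (the coefficients are taken directly from the text; the input values below are hypothetical):

```python
import math

def p_max_mm(es):                 # item 2: final depth vs. energy per unit area es (J/mm^2)
    return 1.15 * es + 16.39

def t_max_us(es, m_g, r_mm):      # item 4: duration vs. es, projectile mass (g) and radius (mm)
    return 2.08 * es + 349.0 * m_g / (math.pi * r_mm ** 2)

def steady_deceleration_g(v_ms):  # item 6: average deceleration vs. initial velocity (m/s)
    return 192.4 * v_ms + 1.89e4

print(p_max_mm(60.0), t_max_us(60.0, 70.0, 20.0), steady_deceleration_g(400.0))
```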

Part II.

Stress wave profiles in vitreous GeO2 were measured using piezoresistance gauges in the pressure range of 5 to 18 GPa under planar plate and spherical projectile impact. Experimental data show that the response of vitreous GeO2 to planar shock loading can be divided into three stages: (1) A ramp elastic precursor has a peak amplitude of 4 GPa and a peak particle velocity of 333 m/s. Wave velocity decreases from the initial longitudinal elastic wave velocity of 3.5 km/s to 2.9 km/s at 4 GPa; (2) A ramp wave with amplitude of 2.11 GPa follows the precursor when the peak loading pressure is 8.4 GPa. Wave velocity drops below the bulk wave velocity in this stage; (3) A shock wave achieving the final shock state forms when the peak pressure is > 6 GPa. The Hugoniot relation is D = 0.917 + 1.711 u (km/s), using the present data and the data of Jackson and Ahrens [1979], when the shock wave pressure is between 6 and 40 GPa for ρ0 = 3.655 g/cm^3. Based on the present data, the phase change from 4-fold to 6-fold coordination of Ge^4+ with O^2- in vitreous GeO2 occurs in the pressure range of 4 to 15 ± 1 GPa under planar shock loading. Comparison of the shock loading data for fused SiO2 with those for vitreous GeO2 demonstrates that the transformation to the rutile structure is similar in both media. The Hugoniots of vitreous GeO2 and fused SiO2 are found to coincide approximately if the pressure in fused SiO2 is scaled by the ratio of fused SiO2 to vitreous GeO2 density. This result, as well as the same structure, provides the basis for considering vitreous GeO2 as an analogous material to fused SiO2 under shock loading. Experimental results from the spherical projectile impact demonstrate: (1) The supported elastic shock in fused SiO2 decays less rapidly than a linear elastic wave when the elastic wave stress amplitude is higher than 4 GPa. The supported elastic shock in vitreous GeO2 decays faster than a linear elastic wave; (2) In vitreous GeO2, unsupported shock waves with peak pressures in the phase transition range (4-15 GPa) decay with propagation distance, x, as ∝ x^-3.35, close to the prediction of Chen et al. [1998]. Based on a simple analysis of spherical wave propagation, we find that the different decay rates of a spherical elastic wave in fused SiO2 and vitreous GeO2 are predictable on the basis of the compressibility variation with stress under one-dimensional strain conditions in the two materials.

Relevance:

30.00%

Publisher:

Abstract:

A time-domain spectrometer for use in the terahertz (THz) spectral range was designed and constructed. Because few methods exist for generating and detecting THz radiation, the spectrometer is expected to have broad applications to solid, liquid, and gas phase samples. In particular, knowledge of complex organic chemistry and chemical abundances in the interstellar medium (ISM) can be obtained when laboratory spectra are compared to astronomical data. The THz spectral region is of particular interest due to its reduced line density compared to the millimeter-wave spectrum, the existence of high-resolution observatories, and potentially strong transitions resulting from the lowest-lying vibrational modes of large molecules.

The heart of the THz time-domain spectrometer (THz-TDS) is the ultrafast laser. Due to the femtosecond duration of ultrafast laser pulses and an energy-time uncertainty relationship, the pulses typically have a several-THz bandwidth. By various means of optical rectification, the optical pulse carrier envelope shape, i.e. intensity-time profile, can be transferred to the phase of the resulting THz pulse. As a consequence, optical pump-THz probe spectroscopy is readily achieved, as was demonstrated in studies of dye-sensitized TiO2, as discussed in chapter 4. Detection of the terahertz radiation is commonly based on electro-optic sampling and provides full phase information. This allows for accurate determination of both the real and imaginary index of refraction, the so-called optical constants, without additional analysis. A suite of amino acids and sugars, all of which have been found in meteorites, were studied in crystalline form embedded in a polyethylene matrix. As the temperature was varied between 10 and 310 K, various strong vibrational modes were found to shift in spectral intensity and frequency. Such modes can be attributed to intramolecular, intermolecular, or phonon modes, or to some combination of the three.
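As an illustration of why full phase information yields the optical constants "without additional analysis", a minimal sketch of the standard thick-slab extraction is given below; it is an assumed simplification (Fabry-Perot echoes and the frequency dependence of the Fresnel terms are ignored), not the analysis code used in the thesis.

```python
import numpy as np

C = 299_792_458.0   # speed of light, m/s

def optical_constants(omega, delta_phi, amp_ratio, d):
    """Real index n and extinction kappa for a slab of thickness d (m).

    delta_phi: spectral phase delay of the sample pulse relative to the reference;
    amp_ratio: |E_sample(omega) / E_reference(omega)|.
    """
    n = 1.0 + C * delta_phi / (omega * d)                        # phase delay gives the real index
    kappa = -(C / (omega * d)) * np.log(amp_ratio * (n + 1.0) ** 2 / (4.0 * n))  # amplitude gives absorption
    return n, kappa
```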

Relevance:

30.00%

Publisher:

Abstract:

The work presented in this thesis revolves around erasure correction coding, as applied to distributed data storage and real-time streaming communications.

First, we examine the problem of allocating a given storage budget over a set of nodes for maximum reliability. The objective is to find an allocation of the budget that maximizes the probability of successful recovery by a data collector accessing a random subset of the nodes. This optimization problem is challenging in general because of its combinatorial nature, despite its simple formulation. We study several variations of the problem, assuming different allocation models and access models, and determine the optimal allocation and the optimal symmetric allocation (in which all nonempty nodes store the same amount of data) for a variety of cases. Although the optimal allocation can have nonintuitive structure and can be difficult to find in general, our results suggest that, as a simple heuristic, reliable storage can be achieved by spreading the budget maximally over all nodes when the budget is large, and spreading it minimally over a few nodes when it is small. Coding would therefore be beneficial in the former case, while uncoded replication would suffice in the latter case.
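A toy calculation in the spirit of this trade-off, under an assumed access model that is not the one analyzed in the thesis: the budget T is spread evenly over m of the nodes, each nonempty node is reached independently with probability p, and (with coded storage) recovery succeeds when at least one unit of data is collected.

```python
from math import ceil, comb

def recovery_probability(T, m, p):
    # Each nonempty node stores T/m, so recovery needs ceil(m/T) of the m nodes.
    need = ceil(m / T)
    return sum(comb(m, k) * p**k * (1 - p)**(m - k) for k in range(need, m + 1))

# Sweeping m for a large and a small budget illustrates the heuristic above:
# spread widely when T is large, concentrate on few nodes when T is small.
for T in (4.0, 1.2):
    print(T, {m: round(recovery_probability(T, m, p=0.6), 3) for m in range(1, 11)})
```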

Second, we study how distributed storage allocations affect the recovery delay in a mobile setting. Specifically, two recovery delay optimization problems are considered for a network of mobile storage nodes: the maximization of the probability of successful recovery by a given deadline, and the minimization of the expected recovery delay. We show that the first problem is closely related to the earlier allocation problem, and solve the second problem completely for the case of symmetric allocations. It turns out that the optimal allocations for the two problems can be quite different. In a simulation study, we evaluated the performance of a simple data dissemination and storage protocol for mobile delay-tolerant networks, and observed that the choice of allocation can have a significant impact on the recovery delay under a variety of scenarios.

Third, we consider a real-time streaming system where messages created at regular time intervals at a source are encoded for transmission to a receiver over a packet erasure link; the receiver must subsequently decode each message within a given delay from its creation time. For erasure models containing a limited number of erasures per coding window, per sliding window, and containing erasure bursts whose maximum length is sufficiently short or long, we show that a time-invariant intrasession code asymptotically achieves the maximum message size among all codes that allow decoding under all admissible erasure patterns. For the bursty erasure model, we also show that diagonally interleaved codes derived from specific systematic block codes are asymptotically optimal over all codes in certain cases. We also study an i.i.d. erasure model in which each transmitted packet is erased independently with the same probability; the objective is to maximize the decoding probability for a given message size. We derive an upper bound on the decoding probability for any time-invariant code, and show that the gap between this bound and the performance of a family of time-invariant intrasession codes is small when the message size and packet erasure probability are small. In a simulation study, these codes performed well against a family of random time-invariant convolutional codes under a number of scenarios.

Finally, we consider the joint problems of routing and caching for named data networking. We propose a backpressure-based policy that employs virtual interest packets to make routing and caching decisions. In a packet-level simulation, the proposed policy outperformed a basic protocol that combines shortest-path routing with least-recently-used (LRU) cache replacement.

Relevance:

30.00%

Publisher:

Abstract:

Hypervelocity impact of meteoroids and orbital debris poses a serious and growing threat to spacecraft. To study hypervelocity impact phenomena, a comprehensive ensemble of real-time, concurrently operated diagnostics has been developed and implemented in the Small Particle Hypervelocity Impact Range (SPHIR) facility. This suite of simultaneously operated instrumentation provides multiple complementary measurements that facilitate the characterization of many impact phenomena in a single experiment. The investigation of hypervelocity impact phenomena described in this work focuses on normal impacts of 1.8 mm nylon 6/6 cylinder projectiles on variable-thickness aluminum targets. The SPHIR facility two-stage light-gas gun is capable of routinely launching 5.5 mg nylon impactors to speeds of 5 to 7 km/s. Refinement of legacy SPHIR operation procedures and the investigation of first-stage pressure have improved the velocity performance of the facility, resulting in an increase in average impact velocity of at least 0.57 km/s. Results for the perforation area indicate that the considered range of target thicknesses represents multiple regimes describing the non-monotonic scaling of target perforation with decreasing target thickness. The laser side-lighting (LSL) system has been developed to provide ultra-high-speed shadowgraph images of the impact event. This novel optical technique is demonstrated to characterize the propagation velocity and two-dimensional optical density of impact-generated debris clouds. Additionally, a debris capture system is located behind the target during every experiment to provide complementary information regarding the trajectory distribution and penetration depth of individual debris particles. The utilization of a coherent, collimated illumination source in the LSL system facilitates the simultaneous measurement of impact phenomena with near-IR and UV-vis spectrograph systems. Comparison of LSL images to concurrent IR results indicates two distinctly different phenomena. A high-speed, pressure-dependent IR-emitting cloud is observed in experiments to expand at velocities much higher than the debris and ejecta phenomena observed using the LSL system. In double-plate target configurations, this phenomenon is observed to interact with the rear wall several microseconds before the subsequent arrival of the debris cloud. Additionally, the dimensional analysis presented by Whitham for blast waves is shown to describe the pressure-dependent radial expansion of the observed IR-emitting phenomenon. Although this work focuses on a single hypervelocity impact configuration, the diagnostic capabilities and techniques described can be used with a wide variety of impactors, materials, and geometries to investigate any number of engineering and scientific problems.

Relevance:

30.00%

Publisher:

Abstract:

In this thesis, we develop an efficient collapse prediction model, the PFA (Peak Filtered Acceleration) model, for buildings subjected to different types of ground motions.

For the structural system, the PFA model covers modern steel and reinforced concrete moment-resisting frame buildings (potentially reinforced concrete shear wall buildings). For ground motions, the PFA model covers ramp-pulse-like ground motions, long-period ground motions, and short-period ground motions.

To predict whether a building will collapse in response to a given ground motion, we first extract long-period components from the ground motion using a Butterworth low-pass filter with a suggested order and cutoff frequency. The order depends on the type of ground motion, and the cutoff frequency depends on the building's natural frequency and ductility. We then compare the filtered acceleration time history with the capacity of the building. The capacity of the building is a constant for 2-dimensional buildings and a limit domain for 3-dimensional buildings. If the filtered acceleration exceeds the building's capacity, the building is predicted to collapse. Otherwise, it is expected to survive the ground motion.
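A minimal sketch of this check is shown below; the filter order, cutoff frequency, and capacity are placeholders for the values prescribed by the PFA model, and the zero-phase filtering is an assumption of this sketch rather than a statement about the thesis's implementation.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def peak_filtered_acceleration(accel, dt, cutoff_hz, order):
    # Low-pass the ground acceleration to keep its long-period content, then take the peak.
    b, a = butter(order, cutoff_hz, btype="low", fs=1.0 / dt)
    return np.max(np.abs(filtfilt(b, a, accel)))

def predict_collapse(accel, dt, cutoff_hz, order, capacity):
    # Collapse is predicted when the peak filtered acceleration exceeds the building's capacity.
    return peak_filtered_acceleration(accel, dt, cutoff_hz, order) > capacity
```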

The parameters used in the PFA model, which include the fundamental period, global ductility, and lateral capacity, can be obtained either from numerical analysis or by interpolation based on the reference building system proposed in this thesis.

The PFA collapse prediction model greatly reduces computational complexity while achieving good accuracy. It is verified by FEM simulations of 13 frame building models and 150 ground motion records.

Based on the developed collapse prediction model, we propose to use PFA (Peak Filtered Acceleration) as a new ground motion intensity measure for collapse prediction. We compare PFA with traditional intensity measures PGA, PGV, PGD, and Sa in collapse prediction and find that PFA has the best performance among all the intensity measures.

We also provide a closed form of the PFA collapse prediction model in terms of a vector intensity measure (PGV, PGD) for practical collapse risk assessment.

Relevance:

30.00%

Publisher:

Abstract:

The complementary techniques of low-energy, variable-angle electron-impact spectroscopy and ultraviolet variable-angle photoelectron spectroscopy have been used to study the electronic spectroscopy and structure of several series of molecules. Electron-impact studies were performed at incident beam energies between 25 eV and 100 eV and at scattering angles ranging from 0° to 90°. The energy-loss regions from 0 eV to greater than 15 eV were studied. Photoelectron spectroscopic studies were conducted using a HeI radiation source and spectra were measured at scattering angles from 45° to 90°. The molecules studied were chosen because of their spectroscopic, chemical, and structural interest. The operation of a new electron-impact spectrometer with multiple-mode target source capability is described. This spectrometer has been used to investigate the spin-forbidden transitions in a number of molecular systems.

The electron-impact spectroscopy of the six chloro-substituted ethylenes has been studied over the energy-loss region from 0-15 eV. Spin-forbidden excitations corresponding to the π → π*, N → T transition have been observed at excitation energies ranging from 4.13 eV in vinyl chloride to 3.54 eV in tetrachloroethylene. Symmetry-forbidden transitions of the type π → np have been observed in trans-dichloroethylene and tetrachloroethylene. In addition, transitions to many states lying above the first ionization potential were observed for the first time. Many of these bands have been assigned to Rydberg series converging to higher ionization potentials. The trends observed in the measured transition energies for the π → π*, N → T, and N → V as well as the π → 3s excitations are discussed and compared to those observed in the methyl- and fluoro-substituted ethylenes.

The electron energy-loss spectra of the group VIb transition metal hexacarbonyls have been studied in the 0 eV to 15 eV region. The differential cross sections were obtained for several features in the 3-7 eV energy-loss region. The symmetry-forbidden nature of the 1A1g → 1A1g, 2t2g(π) → 3t2g(π*) transition in these compounds was confirmed by the high-energy, low-angle behavior of their relative intensities. Several low-lying transitions have been assigned to ligand-field transitions on the basis of the energy and angular behavior of the differential cross sections for these transitions. No transitions which could clearly be assigned to singlet → triplet excitations involving metal orbitals were located. A number of states lying above the first ionization potential have been observed for the first time. A number of features in the 6-14 eV energy-loss region of the spectra of these compounds correspond quite well to those observed in free CO.

A number of exploratory studies have been performed. The π → π*, N → T, singlet → triplet excitation has been located in vinyl bromide at 4.05 eV. We have also observed this transition at approximately 3.8 eV in a cis-/trans- mixture of the 1,2-dibromoethylenes. The low-angle spectrum of iron pentacarbonyl was measured over the energy-loss region extending from 2-12 eV. A number of transitions of 8 eV or greater excitation energy were observed for the first time. Cyclopropane was also studied at both high and low angles, but no clear evidence for any spin-forbidden transitions was found. The electron-impact spectrum of the methyl radical resulting from the pyrolysis of tetramethyl tin was obtained at 100 eV incident energy and at 0° scattering angle. Transitions observed at 5.70 eV and 8.30 eV agree well with previous optical results. In addition, a number of bands were observed in the 8-14 eV region which are most likely due to Rydberg transitions converging to the higher ionization potentials of this molecule. This is the first reported electron-impact spectrum of a polyatomic free radical.

Variable-angle photoelectron spectroscopic studies were performed on a series of three-membered-ring heterocyclic compounds. These compounds are of great interest due to their highly unusual structure. Photoelectron angular distributions using HeI radiation have been measured for the first time for ethylene oxide and ethyleneimine. The measured anisotropy parameters, β, along with those measured for cyclopropane were used to confirm the orbital correlations and photoelectron band assignments. No high values of β similar to those expected for alkene π orbitals were observed for the Walsh or Forster-Coulson-Moffit type orbitals.

Relevance:

30.00%

Publisher:

Abstract:

The 0.2% experimental accuracy of the 1968 Beers and Hughes measurement of the annihilation lifetime of ortho-positronium motivates the attempt to compute the first order quantum electrodynamic corrections to this lifetime. The theoretical problems arising in this computation are here studied in detail up to the point of preparing the necessary computer programs and using them to carry out some of the less demanding steps -- but the computation has not yet been completed. Analytic evaluation of the contributing Feynman diagrams is superior to numerical evaluation, and for this process can be carried out with the aid of the Reduce algebra manipulation computer program.

The relation of the positronium decay rate to the electron-positron annihilation-in-flight amplitude is derived in detail, and it is shown that at threshold annihilation-in-flight, Coulomb divergences appear while infrared divergences vanish. The threshold Coulomb divergences in the amplitude cancel against like divergences in the modulating continuum wave function.

Using the lowest order diagrams of electron-positron annihilation into three photons as a test case, various pitfalls of computer algebraic manipulation are discussed along with ways of avoiding them. The computer manipulation of artificial polynomial expressions is preferable to the direct treatment of rational expressions, even though redundant variables may have to be introduced.

Special properties of the contributing Feynman diagrams are discussed, including the need to restore gauge invariance to the sum of the virtual photon-photon scattering box diagrams by means of a finite subtraction.

A systematic approach to the Feynman-Brown method of decomposition of single-loop diagram integrals with spin-related tensor numerators is developed in detail. This approach allows the Feynman-Brown method to be straightforwardly programmed in the Reduce algebra manipulation language.

The fundamental integrals needed in the wake of the application of the Feynman-Brown decomposition are exhibited, and the methods which were used to evaluate them -- primarily dispersion techniques -- are briefly discussed.

Finally, it is pointed out that while the techniques discussed have permitted the computation of a fair number of the simpler integrals and diagrams contributing to the first order correction of the ortho-positronium annihilation rate, further progress with the more complicated diagrams and with the evaluation of traces is heavily contingent on obtaining access to adequate computer time and core capacity.

Relevance:

30.00%

Publisher:

Abstract:

Earthquake early warning (EEW) systems have been rapidly developing over the past decade. Japan Meteorological Agency (JMA) has an EEW system that was operating during the 2011 M9 Tohoku earthquake in Japan, and this increased the awareness of EEW systems around the world. While longer-time earthquake prediction still faces many challenges to be practical, the availability of shorter-time EEW opens up a new door for earthquake loss mitigation. After an earthquake fault begins rupturing, an EEW system utilizes the first few seconds of recorded seismic waveform data to quickly predict the hypocenter location, magnitude, origin time and the expected shaking intensity level around the region. This early warning information is broadcast to different sites before the strong shaking arrives. The warning lead time of such a system is short, typically a few seconds to a minute or so, and the information is uncertain. These factors limit human intervention to activate mitigation actions and this must be addressed for engineering applications of EEW. This study applies a Bayesian probabilistic approach along with machine learning techniques and decision theories from economics to improve different aspects of EEW operation, including extending it to engineering applications.

Existing EEW systems are often based on a deterministic approach. Often, they assume that only a single event occurs within a short period of time, which led to many false alarms after the Tohoku earthquake in Japan. This study develops a probability-based EEW algorithm based on an existing deterministic model to extend the EEW system to the case of concurrent events, which are often observed during the aftershock sequence after a large earthquake.

To overcome the challenge of uncertain information and the short lead time of EEW, this study also develops an earthquake probability-based automated decision-making (ePAD) framework to make robust decisions for EEW mitigation applications. A cost-benefit model that can capture the uncertainties in EEW information and the decision process is used. This approach is called Performance-Based Earthquake Early Warning, which is based on the PEER Performance-Based Earthquake Engineering method. Use of surrogate models is suggested to improve computational efficiency. Also, new models are proposed to add the influence of lead time into the cost-benefit analysis. For example, a value of information model is used to quantify the potential value of delaying the activation of a mitigation action for a possible reduction of the uncertainty of EEW information in the next update. Two practical examples, evacuation alert and elevator control, are studied to illustrate the ePAD framework. Potential advanced EEW applications, such as the case of multiple-action decisions and the synergy of EEW and structural health monitoring systems, are also discussed.
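A stripped-down sketch of the cost-benefit comparison at the core of such a framework is given below; the numbers are purely illustrative and the single binary action omits the lead-time, uncertainty, and value-of-information considerations that ePAD itself handles.

```python
def should_activate(p_strong_shaking, loss_unmitigated, loss_mitigated, action_cost):
    # Activate the mitigation action when its expected total cost is lower than doing nothing.
    expected_if_idle = p_strong_shaking * loss_unmitigated
    expected_if_act = p_strong_shaking * loss_mitigated + action_cost
    return expected_if_act < expected_if_idle

# Example: stop an elevator at the nearest floor given P(strong shaking) = 0.3.
print(should_activate(0.3, loss_unmitigated=100.0, loss_mitigated=10.0, action_cost=5.0))
```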

Relevance:

30.00%

Publisher:

Abstract:

This thesis describes the use of multiply-substituted stable isotopologues of carbonate minerals and methane gas to better understand how these environmentally significant minerals and gases form and are modified throughout their geological histories. Stable isotopes have a long tradition in earth science as a tool for providing quantitative constraints on how molecules, in or on the earth, formed in both the present and past. Nearly all studies, until recently, have only measured the bulk concentrations of stable isotopes in a phase or species. However, the abundance of various isotopologues within a phase, for example the concentration of isotopologues with multiple rare isotopes (multiply substituted or 'clumped' isotopologues) also carries potentially useful information. Specifically, the abundances of clumped isotopologues in an equilibrated system are a function of temperature and thus knowledge of their abundances can be used to calculate a sample’s formation temperature. In this thesis, measurements of clumped isotopologues are made on both carbonate-bearing minerals and methane gas in order to better constrain the environmental and geological histories of various samples.

Clumped-isotope-based measurements of ancient carbonate-bearing minerals, including apatites, have opened up paleotemperature reconstructions to a variety of systems and time periods. However, a critical issue when using clumped-isotope based measurements to reconstruct ancient mineral formation temperatures is whether the samples being measured have faithfully recorded their original internal isotopic distributions. These original distributions can be altered, for example, by diffusion of atoms in the mineral lattice or through diagenetic reactions. Understanding these processes quantitatively is critical for the use of clumped isotopes to reconstruct past temperatures, quantify diagenesis, and calculate time-temperature burial histories of carbonate minerals. In order to help orient this part of the thesis, Chapter 2 provides a broad overview and history of clumped-isotope based measurements in carbonate minerals.

In Chapter 3, the effects of elevated temperatures on a sample’s clumped-isotope composition are probed in both natural and experimental apatites (which contain structural carbonate groups) and calcites. A quantitative model is created that is calibrated by the experiments and consistent with the natural samples. The model allows for calculations of the change in a sample’s clumped isotope abundances as a function of any time-temperature history.
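A generic first-order-kinetics sketch of what "change in clumped isotope abundances as a function of a time-temperature history" can look like is given below; the Arrhenius parameters and the equilibrium function are placeholders, not the calibration developed in the chapter.

```python
import numpy as np

R_GAS = 8.314  # gas constant, J mol^-1 K^-1

def evolve_clumped_excess(delta0, times_s, temps_K, A=1.0e10, Ea=2.0e5, delta_eq=lambda T: 0.0):
    # Relax the excess toward its (temperature-dependent) equilibrium value with a
    # first-order rate constant k(T) over each step of the time-temperature path.
    delta = delta0
    for i in range(1, len(times_s)):
        dt = times_s[i] - times_s[i - 1]
        T = temps_K[i]
        k = A * np.exp(-Ea / (R_GAS * T))
        delta = delta_eq(T) + (delta - delta_eq(T)) * np.exp(-k * dt)
    return delta
```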

In Chapter 4, the effects of diagenesis on the stable isotopic compositions of apatites are explored on samples from a variety of sedimentary phosphorite deposits. Clumped isotope temperatures and bulk isotopic measurements from carbonate and phosphate groups are compared for all samples. These results demonstrate that samples have experienced isotopic exchange of oxygen atoms in both the carbonate and phosphate groups. A kinetic model is developed that allows for the calculation of the amount of diagenesis each sample has experienced and yields insight into the physical and chemical processes of diagenesis.

The thesis then switches gears and turns its attention to clumped isotope measurements of methane. Methane is a critical greenhouse gas, energy resource, and microbial metabolic product and substrate. Despite its importance both environmentally and economically, much about methane's formational mechanisms and the relative sources of methane to various environments remains poorly constrained. In order to add new constraints to our understanding of the formation of methane in nature, I describe the development and application of methane clumped isotope measurements to environmental deposits of methane. To help orient the reader, a brief overview of the formation of methane in both high and low temperature settings is given in Chapter 5.

In Chapter 6, a method for the measurement of methane clumped isotopologues via mass spectrometry is described. This chapter demonstrates that the measurement is precise and accurate. Additionally, the measurement is calibrated experimentally such that measurements of methane clumped isotope abundances can be converted into equivalent formational temperatures. This study represents the first time that methane clumped isotope abundances have been measured at useful precisions.

In Chapter 7, the methane clumped isotope method is applied to natural samples from a variety of settings. These settings include thermogenic gases formed and reservoired in shales, migrated thermogenic gases, biogenic gases, mixed biogenic and thermogenic gas deposits, and experimentally generated gases. In all cases, calculated clumped isotope temperatures make geological sense as formation temperatures or mixtures of high and low temperature gases. Based on these observations, we propose that the clumped isotope temperature of an unmixed gas represents its formation temperature; this was neither an obvious nor an expected result and has important implications for how methane forms in nature. Additionally, these results demonstrate that methane clumped isotope compositions provide valuable additional constraints for studying natural methane deposits.

Relevance:

30.00%

Publisher:

Abstract:

Understanding the origin of life on Earth has long fascinated the minds of the global community, and has been a driving factor in interdisciplinary research for centuries. Beyond the pioneering work of Darwin, perhaps the most widely known study in the last century is that of Miller and Urey, who examined the possibility of the formation of prebiotic chemical precursors on the primordial Earth [1]. More recent studies have shown that amino acids, the chemical building blocks of the biopolymers that comprise life as we know it on Earth, are present in meteoritic samples, and that the molecules extracted from the meteorites display isotopic signatures indicative of an extraterrestrial origin [2]. The most recent major discovery in this area has been the detection of glycine (NH2CH2COOH), the simplest amino acid, in pristine cometary samples returned by the NASA STARDUST mission [3]. Indeed, the open questions left by these discoveries, both in the public and scientific communities, hold such fascination that NASA has designated the understanding of our "Cosmic Origins" as a key mission priority.

Despite these exciting discoveries, our understanding of the chemical and physical pathways to the formation of prebiotic molecules is woefully incomplete. This is largely because we do not yet fully understand how the interplay between grain-surface and sub-surface ice reactions and the gas-phase affects astrophysical chemical evolution, and our knowledge of chemical inventories in these regions is incomplete. The research presented here aims to directly address both these issues, so that future work to understand the formation of prebiotic molecules has a solid foundation from which to work.

From an observational standpoint, a dedicated campaign to identify hydroxylamine (NH2OH), potentially a direct precursor to glycine, in the gas-phase was undertaken. No trace of NH2OH was found. These observations motivated a refinement of the chemical models of glycine formation, and have largely ruled out a gas-phase route to the synthesis of the simplest amino acid in the ISM. A molecular mystery in the case of the carrier of a series of transitions was resolved using observational data toward a large number of sources, confirming the identity of this important carbon-chemistry intermediate B11244 as l-C3H+ and identifying it in at least two new environments. Finally, the doubly-nitrogenated molecule carbodiimide HNCNH was identified in the ISM for the first time through maser emission features in the centimeter-wavelength regime.

In the laboratory, a TeraHertz Time-Domain Spectrometer was constructed to obtain the experimental spectra necessary to search for solid-phase species in the ISM in the THz region of the spectrum. These investigations have shown a striking dependence of the spectra presented in the THz on the large-scale, long-range (i.e. lattice) structure of the ices. A database of molecular spectra has been started, and both the simplest and most abundant ice species, which have already been identified, as well as a number of more complex species, have been studied. The exquisite sensitivity of the THz spectra to both the structure and thermal history of these ices may lead to better probes of complex chemical and dynamical evolution in interstellar environments.

Relevance:

30.00%

Publisher:

Abstract:

This thesis presents a new class of solvers for the subsonic compressible Navier-Stokes equations in general two- and three-dimensional spatial domains. The proposed methodology incorporates: 1) A novel linear-cost implicit solver based on use of higher-order backward differentiation formulae (BDF) and the alternating direction implicit approach (ADI); 2) A fast explicit solver; 3) Dispersionless spectral spatial discretizations; and 4) A domain decomposition strategy that negotiates the interactions between the implicit and explicit domains. In particular, the implicit methodology is quasi-unconditionally stable (it does not suffer from CFL constraints for adequately resolved flows), and it can deliver orders of time accuracy between two and six in the presence of general boundary conditions. In fact this thesis presents, for the first time in the literature, high-order time-convergence curves for Navier-Stokes solvers based on the ADI strategy---previous ADI solvers for the Navier-Stokes equations have not demonstrated orders of temporal accuracy higher than one. An extended discussion is presented in this thesis which places on a solid theoretical basis the observed quasi-unconditional stability of the methods of orders two through six. The performance of the proposed solvers is favorable. For example, a two-dimensional rough-surface configuration including boundary layer effects at Reynolds number equal to one million and Mach number 0.85 (with a well-resolved boundary layer, run up to a sufficiently long time that single vortices travel the entire spatial extent of the domain, and with spatial mesh sizes near the wall of the order of one hundred-thousandth the length of the domain) was successfully tackled in a relatively short (approximately thirty-hour) single-core run; for such discretizations an explicit solver would require truly prohibitive computing times. As demonstrated via a variety of numerical experiments in two- and three-dimensions, further, the proposed multi-domain parallel implicit-explicit implementations exhibit high-order convergence in space and time, useful stability properties, limited dispersion, and high parallel efficiency.
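For readers unfamiliar with the ADI idea invoked above, a generic Peaceman-Rachford step for the two-dimensional periodic heat equation is sketched below; this is only an illustration of alternating implicit directions, not the thesis's high-order BDF/ADI scheme for the compressible Navier-Stokes equations.

```python
import numpy as np

def adi_heat_step(u, dt, dx, nu):
    """One Peaceman-Rachford ADI step for u_t = nu*(u_xx + u_yy) on a periodic n x n grid."""
    n = u.shape[0]
    r = nu * dt / (2.0 * dx ** 2)
    I = np.eye(n)
    # One-dimensional (I - r*D) operator with periodic second differences D.
    A = (1 + 2 * r) * I - r * (np.roll(I, 1, axis=0) + np.roll(I, -1, axis=0))
    d2 = lambda v, ax: np.roll(v, 1, axis=ax) - 2.0 * v + np.roll(v, -1, axis=ax)
    u_half = np.linalg.solve(A, u + r * d2(u, 1))                 # implicit in x, explicit in y
    return np.linalg.solve(A, (u_half + r * d2(u_half, 0)).T).T   # implicit in y, explicit in x
```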