9 results for first-passage time in CaltechTHESIS


Relevance: 90.00%

Abstract:

Some aspects of wave propagation in thin elastic shells are considered. The governing equations are derived by a method which makes their relationship to the exact equations of linear elasticity quite clear. Finite wave propagation speeds are ensured by the inclusion of the appropriate physical effects.

The problem of a constant pressure front moving with constant velocity along a semi-infinite circular cylindrical shell is studied. The behavior of the solution immediately under the leading wave is found, as well as the short time solution behind the characteristic wavefronts. The main long time disturbance is found to travel with the velocity of very long longitudinal waves in a bar and an expression for this part of the solution is given.

When a constant moment is applied to the lip of an open spherical shell, there is an interesting effect due to the focusing of the waves. This phenomenon is studied and an expression is derived for the wavefront behavior for the first passage of the leading wave and its first reflection.

For the two problems mentioned, the method used involves reducing the governing partial differential equations to ordinary differential equations by means of a Laplace transform in time. The information sought is then extracted by performing the appropriate asymptotic expansion with the Laplace variable as the parameter.
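
In outline, and in generic notation rather than the thesis's exact shell equations, the method proceeds as follows (a minimal sketch): a Laplace transform in time converts the governing partial differential equations into ordinary differential equations in the axial coordinate, with the transform variable s as a parameter,

    \bar{w}(x,s) = \int_0^\infty w(x,t)\, e^{-st}\, dt,

and the wavefront behavior follows from inverting the large-s expansion of the transformed solution,

    \bar{w}(x,s) \sim e^{-s x / c} \sum_{n \ge 0} a_n(x)\, s^{-(n+1)}, \qquad s \to \infty,

where c is the relevant characteristic wavefront speed; term-by-term inversion gives the short-time behavior just behind the front, while the small-s behavior of the transform governs the main long-time disturbance.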

Relevance: 30.00%

Abstract:

Hypervelocity impact of meteoroids and orbital debris poses a serious and growing threat to spacecraft. To study hypervelocity impact phenomena, a comprehensive ensemble of real-time, concurrently operated diagnostics has been developed and implemented in the Small Particle Hypervelocity Impact Range (SPHIR) facility. This suite of simultaneously operated instrumentation provides multiple complementary measurements that facilitate the characterization of many impact phenomena in a single experiment.

The investigation described in this work focuses on normal impacts of 1.8 mm nylon 6/6 cylindrical projectiles on aluminum targets of variable thickness. The SPHIR facility's two-stage light-gas gun routinely launches 5.5 mg nylon impactors to speeds of 5 to 7 km/s. Refinement of legacy SPHIR operating procedures and an investigation of first-stage pressure have improved the velocity performance of the facility, increasing the average impact velocity by at least 0.57 km/s. Results for the perforation area indicate that the considered range of target thicknesses spans multiple regimes of the non-monotonic scaling of target perforation with decreasing target thickness.

The laser side-lighting (LSL) system has been developed to provide ultra-high-speed shadowgraph images of the impact event. This novel optical technique is demonstrated to characterize the propagation velocity and two-dimensional optical density of impact-generated debris clouds. Additionally, a debris capture system located behind the target in every experiment provides complementary information on the trajectory distribution and penetration depth of individual debris particles. The use of a coherent, collimated illumination source in the LSL system permits simultaneous measurement of impact phenomena with near-IR and UV-vis spectrograph systems.

Comparison of LSL images with concurrent IR results indicates two distinctly different phenomena. A high-speed, pressure-dependent IR-emitting cloud is observed to expand at velocities much higher than the debris and ejecta observed with the LSL system. In double-plate target configurations, this cloud interacts with the rear wall several microseconds before the subsequent arrival of the debris cloud. Additionally, the dimensional analysis presented by Whitham for blast waves is shown to describe the pressure-dependent radial expansion of the observed IR-emitting phenomenon. Although this work focuses on a single hypervelocity impact configuration, the diagnostic capabilities and techniques described can be applied to a wide variety of impactors, materials, and geometries to investigate any number of engineering and scientific problems.
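
For context, the classical point-source blast-wave similarity scaling on which such dimensional analysis rests (a standard result; the thesis's specific application to the IR-emitting cloud may differ in detail) is

    R(t) \propto \left( E\, t^2 / \rho_0 \right)^{1/(j+2)}, \qquad j = 1, 2, 3,

for planar, cylindrical, and spherical geometry respectively, where R is the front radius, E the energy released, and \rho_0 the ambient density; the ambient chamber pressure enters through \rho_0, consistent with the pressure-dependent radial expansion described above.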

Relevance: 30.00%

Abstract:

The 0.2% experimental accuracy of the 1968 Beers and Hughes measurement of the annihilation lifetime of ortho-positronium motivates the attempt to compute the first-order quantum-electrodynamic corrections to this lifetime. The theoretical problems arising in this computation are studied here in detail, up to the point of preparing the necessary computer programs and using them to carry out some of the less demanding steps -- but the computation has not yet been completed. Analytic evaluation of the contributing Feynman diagrams is superior to numerical evaluation, and in this case it can be carried out with the aid of the Reduce algebra-manipulation program.

The relation of the positronium decay rate to the electron-positron annihilation-in-flight amplitude is derived in detail, and it is shown that at threshold the annihilation-in-flight amplitude exhibits Coulomb divergences while its infrared divergences vanish. The threshold Coulomb divergences in the amplitude cancel against like divergences in the modulating continuum wave function.

Using the lowest order diagrams of electron-positron annihilation into three photons as a test case, various pitfalls of computer algebraic manipulation are discussed along with ways of avoiding them. The computer manipulation of artificial polynomial expressions is preferable to the direct treatment of rational expressions, even though redundant variables may have to be introduced.
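
A minimal sketch of this polynomial-over-rational strategy, written here with SymPy in Python rather than Reduce (the variable names and the toy expression are hypothetical, purely for illustration):

    import sympy as sp

    x, y, D = sp.symbols('x y D')  # D is the redundant variable standing for 1/(x + y)

    # Direct rational manipulation forces repeated gcd/cancellation work.
    rational = (x**2 - y**2) / (x + y)

    # Polynomial alternative: keep everything polynomial in x, y, and D,
    # and reinstate the denominator only at the very end.
    polynomial = sp.expand((x**2 - y**2) * D)    # stays polynomial in x, y, D
    recovered = polynomial.subs(D, 1 / (x + y))  # substitute D -> 1/(x + y)
    assert sp.simplify(recovered - rational) == 0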

Special properties of the contributing Feynman diagrams are discussed, including the need to restore gauge invariance to the sum of the virtual photon-photon scattering box diagrams by means of a finite subtraction.

A systematic approach to the Feynman-Brown method of decomposition of single-loop diagram integrals with spin-related tensor numerators is developed in detail. This approach allows the Feynman-Brown method to be straightforwardly programmed in the Reduce algebra-manipulation language.

The fundamental integrals needed in the wake of the application of the Feynman-Brown decomposition are exhibited, and the methods used to evaluate them -- primarily dispersion techniques -- are briefly discussed.

Finally, it is pointed out that while the techniques discussed have permitted the computation of a fair number of the simpler integrals and diagrams contributing to the first order correction of the ortho-positronium annihilation rate, further progress with the more complicated diagrams and with the evaluation of traces is heavily contingent on obtaining access to adequate computer time and core capacity.

Relevance: 30.00%

Abstract:

In the quest for a descriptive theory of decision-making, the rational actor model in economics imposes rather unrealistic expectations and abilities on human decision makers. The further we move from idealized scenarios, such as perfectly competitive markets, and ambitiously extend the reach of the theory to describe everyday decision-making situations, the less sense these assumptions make. Behavioural economics has instead proposed models based on assumptions that are more psychologically realistic, with the aim of gaining more precision and descriptive power. Increased psychological realism, however, comes at the cost of a greater number of parameters and higher model complexity. There is now a plethora of models, based on different assumptions and applicable in differing contextual settings, and selecting the right model to use tends to be an ad hoc process. In this thesis, we develop optimal experimental design methods and evaluate different behavioural theories against evidence from lab and field experiments.

We first look at evidence from controlled laboratory experiments, in which subjects are presented with choices between monetary gambles, or lotteries. Different decision-making theories evaluate the choices differently and make distinct predictions about the subjects' choices, so theories whose predictions are inconsistent with the actual choices can be systematically eliminated. Behavioural theories can have multiple parameters, requiring complex experimental designs with a very large number of possible choice tests; this imposes computational and economic constraints on classical experimental design methods. We develop a methodology of adaptive tests, Bayesian Rapid Optimal Adaptive Designs (BROAD), that sequentially chooses the "most informative" test at each stage and, based on the response, updates its posterior beliefs over the theories, which in turn determines the next most informative test to run. BROAD uses the Equivalence Class Edge Cutting (EC2) criterion to select tests. We prove that the EC2 criterion is adaptively submodular, which allows us to prove theoretical guarantees against the Bayes-optimal testing sequence even in the presence of noisy responses. In simulated ground-truth experiments, we find that the EC2 criterion recovers the true hypotheses with significantly fewer tests than more widely used criteria such as Information Gain and Generalized Binary Search. We show, theoretically as well as experimentally, that these popular criteria can, surprisingly, perform poorly in the presence of noise or subject errors. Furthermore, we use the adaptive submodularity of EC2 to implement an accelerated greedy version of BROAD, which leads to orders-of-magnitude speedup over other methods.
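
A minimal, noise-free sketch of greedy test selection under an EC2-style objective (the hypotheses, equivalence classes, and tests below are hypothetical; BROAD itself handles noisy responses and uses an accelerated greedy implementation):

    from itertools import combinations

    # Hypotheses with prior probabilities, grouped into equivalence classes
    # (e.g., competing theories); each test deterministically maps a
    # hypothesis to an outcome.
    priors = {'h1': 0.3, 'h2': 0.2, 'h3': 0.3, 'h4': 0.2}
    klass = {'h1': 'A', 'h2': 'A', 'h3': 'B', 'h4': 'B'}
    tests = {
        't1': {'h1': 0, 'h2': 0, 'h3': 1, 'h4': 1},
        't2': {'h1': 0, 'h2': 1, 'h3': 0, 'h4': 1},
    }

    def edge_weight(alive):
        """Total weight of edges linking surviving hypotheses in different classes."""
        return sum(priors[a] * priors[b]
                   for a, b in combinations(alive, 2) if klass[a] != klass[b])

    def expected_cut(test, alive):
        """Expected weight of edges cut by running this test (the EC2 gain)."""
        total = sum(priors[h] for h in alive)
        gain = 0.0
        for o in set(tests[test][h] for h in alive):
            survivors = [h for h in alive if tests[test][h] == o]
            p_o = sum(priors[h] for h in survivors) / total
            gain += p_o * (edge_weight(alive) - edge_weight(survivors))
        return gain

    alive = list(priors)
    best = max(tests, key=lambda t: expected_cut(t, alive))
    print('first test chosen:', best)  # t1, which separates the two classes outright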

We use BROAD to perform two experiments. First, we compare the main classes of theories for decision-making under risk, namely expected value, prospect theory, constant relative risk aversion (CRRA), and moments models. Subjects are given an initial endowment and sequentially presented with choices between two lotteries, with the possibility of losses. The lotteries are selected using BROAD, and 57 subjects from Caltech and UCLA are incentivized by randomly realizing one of the chosen lotteries. Aggregate posterior probabilities over the theories show limited evidence in favour of CRRA and moments models. Classifying the subjects into types shows that most subjects are described by prospect theory, followed by expected value. Adaptive experimental design raises the possibility of strategic manipulation: subjects could mask their true preferences and choose differently in order to obtain more favourable tests in later rounds, thereby increasing their payoffs. We pay close attention to this problem; strategic manipulation is ruled out both because it is infeasible in practice and because we do not find any signatures of it in our data.

In the second experiment, we compare the main theories of time preference: exponential discounting, hyperbolic discounting, "present bias" models (quasi-hyperbolic (α, β) discounting and fixed-cost discounting), and generalized-hyperbolic discounting. 40 subjects from UCLA were given choices between two options: a smaller but more immediate payoff versus a larger but later payoff. We found very limited evidence for present-bias models and hyperbolic discounting; most subjects were classified as generalized-hyperbolic discounting types, followed by exponential discounting.
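
For reference, the standard functional forms of these discount factors, in conventional notation (the thesis's (α, β) parameterization may differ):

    exponential:             D(t) = \delta^t
    hyperbolic:              D(t) = 1 / (1 + k t)
    quasi-hyperbolic:        D(0) = 1, \quad D(t) = \beta \delta^t \ \text{for } t > 0
    generalized hyperbolic:  D(t) = (1 + \alpha t)^{-\beta/\alpha}

with fixed-cost discounting subtracting a lump cost from any delayed payoff; present bias corresponds to \beta < 1 or to a positive fixed cost.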

In these models the passage of time is linear. We instead consider a psychological model where the perception of time is subjective. We prove that when the biological (subjective) time is positively dependent, it gives rise to hyperbolic discounting and temporal choice inconsistency.

We also test the predictions of behavioural theories in the "wild", paying particular attention to prospect theory, which emerged as the dominant theory in our lab experiments on risky choice. Loss aversion and reference dependence predict that consumers will behave in ways distinct from what the standard rational model predicts. Specifically, loss aversion predicts that when an item is offered at a discount, the demand for it will be greater than its price elasticity alone can explain. Even more importantly, when the item is no longer discounted, demand for its close substitute should increase excessively. We tested this prediction using a discrete choice model with a loss-averse utility function on data from a large eCommerce retailer. Not only did we identify loss aversion, but we also found that the effect decreases with consumers' experience. We outline the policy implications of consumer loss aversion and strategies for competitive pricing.
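
One standard way to formalize this prediction (a generic reference-dependent specification; not necessarily the exact utility function estimated in the thesis) is to add a gain-loss term around a reference price r:

    u(p \mid r) = -\eta p + \mu(r - p), \qquad
    \mu(z) = z \ \text{if } z \ge 0, \quad \mu(z) = \lambda z \ \text{if } z < 0, \qquad \lambda > 1,

where \lambda is the loss-aversion coefficient. A discount (p < r) registers as a gain on top of the ordinary price response, while a return to the regular price, once the reference has adapted to the discounted level, registers as a loss and pushes demand toward the close substitute.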

In future work, BROAD can be applied widely to test different behavioural models, e.g. in social preference and game theory, and in different contextual settings. Additional measurements beyond choice data, including biological measurements such as skin conductance, can be used to eliminate hypotheses more rapidly and speed up model comparison. Discrete choice models also provide a framework for testing behavioural models with field data, and encourage combined lab-field experiments.

Relevance: 30.00%

Abstract:

Understanding the origin of life on Earth has long fascinated the minds of the global community, and has been a driving factor in interdisciplinary research for centuries. Beyond the pioneering work of Darwin, perhaps the most widely known study in the last century is that of Miller and Urey, who examined the possibility of the formation of prebiotic chemical precursors on the primordial Earth [1]. More recent studies have shown that amino acids, the chemical building blocks of the biopolymers that comprise life as we know it on Earth, are present in meteoritic samples, and that the molecules extracted from the meteorites display isotopic signatures indicative of an extraterrestrial origin [2]. The most recent major discovery in this area has been the detection of glycine (NH2CH2COOH), the simplest amino acid, in pristine cometary samples returned by the NASA STARDUST mission [3]. Indeed, the open questions left by these discoveries, both in the public and scientific communities, hold such fascination that NASA has designated the understanding of our "Cosmic Origins" as a key mission priority.

Despite these exciting discoveries, our understanding of the chemical and physical pathways to the formation of prebiotic molecules is woefully incomplete. This is largely because we do not yet fully understand how the interplay between grain-surface and sub-surface ice reactions and the gas-phase affects astrophysical chemical evolution, and our knowledge of chemical inventories in these regions is incomplete. The research presented here aims to directly address both these issues, so that future work to understand the formation of prebiotic molecules has a solid foundation from which to work.

From an observational standpoint, a dedicated campaign to identify hydroxylamine (NH2OH), potentially a direct precursor to glycine, in the gas phase was undertaken. No trace of NH2OH was found. These observations motivated a refinement of the chemical models of glycine formation, and have largely ruled out a gas-phase route to the synthesis of the simplest amino acid in the ISM. The molecular mystery of the carrier of a series of unidentified transitions, B11244, was resolved using observational data toward a large number of sources, confirming this important carbon-chemistry intermediate as l-C3H+ and identifying it in at least two new environments. Finally, the doubly-nitrogenated molecule carbodiimide (HNCNH) was identified in the ISM for the first time through maser emission features in the centimeter-wavelength regime.

In the laboratory, a TeraHertz Time-Domain Spectrometer was constructed to obtain the experimental spectra necessary to search for solid-phase species in the ISM in the THz region of the spectrum. These investigations have shown a striking dependence of the THz spectra of the ices on their large-scale, long-range (i.e., lattice) structure. A database of molecular spectra has been started: both the simplest and most abundant ice species, which have already been identified, and a number of more complex species have been studied. The exquisite sensitivity of the THz spectra to both the structure and the thermal history of these ices may lead to better probes of complex chemical and dynamical evolution in interstellar environments.

Relevance: 30.00%

Abstract:

This thesis presents a new class of solvers for the subsonic compressible Navier-Stokes equations in general two- and three-dimensional spatial domains. The proposed methodology incorporates: 1) a novel linear-cost implicit solver based on higher-order backward differentiation formulae (BDF) and the alternating direction implicit (ADI) approach; 2) a fast explicit solver; 3) dispersionless spectral spatial discretizations; and 4) a domain decomposition strategy that negotiates the interactions between the implicit and explicit domains.

In particular, the implicit methodology is quasi-unconditionally stable (it does not suffer from CFL constraints for adequately resolved flows), and it can deliver orders of time accuracy between two and six in the presence of general boundary conditions. In fact, this thesis presents, for the first time in the literature, high-order time-convergence curves for Navier-Stokes solvers based on the ADI strategy; previous ADI solvers for the Navier-Stokes equations have not demonstrated orders of temporal accuracy higher than one. An extended discussion places the observed quasi-unconditional stability of the methods of orders two through six on a solid theoretical basis.

The performance of the proposed solvers is favorable. For example, a two-dimensional rough-surface configuration including boundary-layer effects at a Reynolds number of one million and a Mach number of 0.85 (with a well-resolved boundary layer, run to a time long enough for single vortices to travel the entire spatial extent of the domain, and with spatial mesh sizes near the wall of the order of one hundred-thousandth of the domain length) was successfully tackled in a relatively short, approximately thirty-hour, single-core run; for such discretizations an explicit solver would require truly prohibitive computing times. As demonstrated via a variety of numerical experiments in two and three dimensions, the proposed multi-domain parallel implicit-explicit implementations further exhibit high-order convergence in space and time, useful stability properties, limited dispersion, and high parallel efficiency.
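
To illustrate the alternating-direction idea on which these solvers build, here is a minimal Peaceman-Rachford ADI sketch for the 2-D heat equation, a standard model problem (the grid size, time step, and initial field are hypothetical; the thesis's BDF-based, quasi-unconditionally stable Navier-Stokes scheme is far more elaborate):

    import numpy as np
    from scipy.linalg import solve_banded

    # ADI for u_t = u_xx + u_yy on the unit square, zero Dirichlet boundaries.
    n, dt = 64, 1e-3
    h = 1.0 / (n + 1)
    r = dt / (2 * h**2)

    # Banded form of (I - r*D2) for scipy.linalg.solve_banded((1, 1), ...).
    ab = np.zeros((3, n))
    ab[0, 1:] = -r         # superdiagonal
    ab[1, :] = 1 + 2 * r   # diagonal
    ab[2, :-1] = -r        # subdiagonal

    def d2(u, axis):
        """Second difference along one axis with zero Dirichlet boundaries."""
        up = np.roll(u, -1, axis)
        um = np.roll(u, 1, axis)
        if axis == 0:
            up[-1, :] = 0.0
            um[0, :] = 0.0
        else:
            up[:, -1] = 0.0
            um[:, 0] = 0.0
        return (up - 2 * u + um) / h**2

    u = np.random.rand(n, n)  # hypothetical initial condition
    for _ in range(100):
        # Half step 1: implicit in x (axis 0), explicit in y (axis 1).
        u = solve_banded((1, 1), ab, u + (dt / 2) * d2(u, 1))
        # Half step 2: implicit in y, explicit in x (solve along y via transpose).
        u = solve_banded((1, 1), ab, (u + (dt / 2) * d2(u, 0)).T).T

Each half step solves only tridiagonal systems, which is the source of the linear cost per time step that the implicit methodology above exploits.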

Relevance: 30.00%

Abstract:

Much of the chemistry that affects life on planet Earth occurs in the condensed phase. The TeraHertz (THz), or far-infrared (far-IR), region of the electromagnetic spectrum (from 0.1 THz to 10 THz, 3 cm-1 to 300 cm-1, or 3000 μm to 30 μm) has been shown to provide unique possibilities for the study of condensed-phase processes. The goal of this work is to expand the possibilities available in the THz region and to undertake new investigations of fundamental interest to chemistry. Since we are fundamentally interested in condensed-phase processes, this thesis focuses on two areas where THz spectroscopy can provide new understanding: astrochemistry and solvation science. To advance these fields, we had to develop new instrumentation to enable the necessary experiments. We first developed a new experimental setup capable of studying astrochemical ice analogs in both the THz, or far-IR, region (0.3 - 7.5 THz; 10 - 250 cm-1) and the mid-IR (400 - 4000 cm-1). The importance of astrochemical ices lies in their key role in the formation of complex organic molecules, such as amino acids and sugars, in space. The instruments are thus capable of performing a variety of spectroscopic studies that can provide laboratory data especially relevant to astronomical observations from telescopes such as the Herschel Space Observatory, the Stratospheric Observatory for Infrared Astronomy (SOFIA), and the Atacama Large Millimeter Array (ALMA). The experimental apparatus uses a THz time-domain spectrometer, with a 1750/875 nm plasma source and a GaP detector crystal, to cover the bandwidth mentioned above with ~10 GHz (~0.3 cm-1) resolution.
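
A small sanity-check sketch of the frequency/wavenumber/wavelength equivalences quoted above (the helper names are hypothetical):

    C_CM_PER_S = 2.99792458e10  # speed of light in cm/s

    def thz_to_wavenumber(f_thz):
        """Frequency in THz -> wavenumber in cm^-1."""
        return f_thz * 1e12 / C_CM_PER_S

    def thz_to_wavelength_um(f_thz):
        """Frequency in THz -> wavelength in micrometers."""
        return C_CM_PER_S / (f_thz * 1e12) * 1e4

    for f in (0.1, 10.0):
        print(f, thz_to_wavenumber(f), thz_to_wavelength_um(f))
    # 0.1 THz -> ~3.3 cm^-1 and ~3000 um; 10 THz -> ~334 cm^-1 and 30 um,
    # matching the ranges quoted above.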

Using this instrumentation, experimental spectra of astrochemical ice analogs of water and carbon dioxide, in pure, mixed, and layered ices, were collected at different temperatures under high-vacuum conditions with the goal of investigating the structure of the ice. We tentatively observe a new feature in both amorphous solid water and crystalline water at 33 cm-1 (1 THz). In addition, our studies of mixed and layered ices show how it is possible to identify the location of carbon dioxide as it segregates within the ice by observing its effect on the THz spectrum of water ice. The THz spectra of mixed and layered ices are further analyzed by fitting their spectral features to those of pure amorphous solid water and crystalline water ice, in order to quantify the effects of temperature changes on structure. From the results of this work, it appears that THz spectroscopy is potentially well suited to the study of thermal transformations within the ice.

To advance the study of liquids with THz spectroscopy, we developed a new ultrafast nonlinear THz spectroscopic technique: heterodyne-detected, ultrafast THz Kerr effect (TKE) spectroscopy. We implemented a heterodyne-detection scheme in a TKE spectrometer that uses a stilbazolium-based THz emitter, 4-N,N-dimethylamino-4'-N'-methyl-stilbazolium 2,4,6-trimethylbenzenesulfonate (DSTMS), and high-numerical-aperture optics, generating THz electric fields in excess of 300 kV/cm in the sample. This allows us to report the first measurement of quantum beats at terahertz (THz) frequencies resulting from vibrational coherences initiated by the nonlinear, dipolar interaction of a broadband, high-energy, (sub)picosecond THz pulse with the sample. Our instrument improves on both the frequency coverage and the sensitivity previously reported; it also ensures a backgroundless measurement of the THz Kerr effect in pure liquids. For liquid diiodomethane, we observe a quantum beat at 3.66 THz (122 cm-1), in exact agreement with the fundamental transition frequency of the ν4 vibration of the molecule. This result provides new insight into dipolar vs. Raman selection rules at terahertz frequencies.

To conclude, we discuss future directions for nonlinear THz spectroscopy in the Blake lab. We report the first results from an experiment using a plasma-based THz source for nonlinear spectroscopy, which has the potential to enable nonlinear THz spectra with sub-100 fs temporal resolution, and we describe how the optics involved in the plasma mechanism can enable THz pulse shaping. Finally, we discuss how a single-shot THz detection scheme could improve the acquisition of THz data and how such a scheme could be implemented in the Blake lab. The instruments developed herein will hopefully remain part of the group's core competencies and serve as building blocks for the next generation of THz instrumentation that pushes the frontiers of both chemistry and the scientific enterprise as a whole.

Relevance: 30.00%

Abstract:

The stability of a fluid having a non-uniform temperature stratification is examined analytically for its response to infinitesimal disturbances. The growth rates of disturbances have been established for a semi-infinite fluid for Rayleigh numbers of 10^3, 10^4, and 10^5 and for Prandtl numbers of 7.0 and 0.7.

The critical Rayleigh number for a semi-infinite fluid, based on the effective fluid depth, is found to be 32, while it is shown that for a finite fluid layer the critical Rayleigh number depends on the rate of heating. The minimum critical Rayleigh number, based on the depth of a fluid layer, is found to be 1340.
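
Assuming the standard definition of the Rayleigh number (the two critical values above differ only in the choice of the length scale d):

    Ra = g \alpha \Delta T\, d^3 / (\nu \kappa),

where g is the gravitational acceleration, \alpha the thermal expansion coefficient, \Delta T the temperature difference, \nu the kinematic viscosity, and \kappa the thermal diffusivity; taking d as the effective depth of the heated layer gives the critical value 32, while taking it as the full layer depth gives 1340.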

The stability of a finite fluid layer is examined for two special forms of heating. The first is constant flux heating, while in the second, the temperature of the lower surface is increased uniformly in time. In both cases, it is shown that for moderate rates of heating the critical Rayleigh number is reduced, over the value for very slow heating, while for very rapid heating the critical Rayleigh number is greatly increased. These results agree with published experimental observations.

The question of steady, non-cellular convection is given qualitative consideration. It is concluded that, although the motion may originate from infinitesimal disturbances during non-uniform heating, the final flow field is intrinsically non-linear.

Relevance: 30.00%

Abstract:

In the first section of this thesis, two-dimensional properties of the human eye-movement control system were studied. The vertical-horizontal interaction was investigated using a two-dimensional target motion consisting of a sinusoid in one direction, vertical or horizontal, and low-pass filtered Gaussian random motion of variable bandwidth (and hence variable information content) in the orthogonal direction. It was found that the random motion reduced the efficiency of the sinusoidal tracking; however, the sinusoidal tracking was only slightly dependent on the bandwidth of the random motion. Thus the system should be thought of as consisting of two independent channels with a small amount of mutual cross-talk.

These target motions were then rotated to discover whether or not the system is capable of recognizing the two-component nature of the target motion. That is, the sinusoid was presented along an oblique line (neither vertical nor horizontal) with the random motion orthogonal to it. The system did not simply track the vertical and horizontal components of motion, but rotated its frame of reference so that its two tracking channels coincided with the directions of the two target motion components. This recognition occurred even when the two orthogonal motions were both random, but with different bandwidths.

In the second section, time delays, prediction, and power spectra were examined. Time delays were calculated in response to various periodic signals, narrow-band Gaussian random motions of various bandwidths, and sinusoids. It was demonstrated that prediction occurred only when the target motion was periodic, and only if the harmonic content was such that the signal was sufficiently narrow-band. It appears as if general periodic motions are split into predictive and non-predictive components.

For unpredictable motions, the relationship between the time delay and the average speed of the retinal image was linear. Based on this, I proposed a model explaining the time delays for both random and periodic motions. My experiments did not establish whether the system is sampled-data or continuous; however, the model can be interpreted as representing a sampled-data system whose sample interval is a function of the target motion.
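
In symbols, the stated linear relationship takes the form (hypothetical notation):

    \tau(\bar{v}) = \tau_0 + k \bar{v},

where \bar{v} is the average speed of the retinal image, \tau_0 the delay at zero speed, and k an empirical constant; under the sampled-data reading, the effective sample interval would vary with the target motion in the same way.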

It was shown that increasing the bandwidth of the low-pass filtered Gaussian random motion resulted in an increase of the eye movement bandwidth. Some properties of the eyeball-muscle dynamics and the extraocular muscle "active state tension" were derived.