13 results for Probabilities

in CaltechTHESIS


Relevance: 20.00%

Abstract:

Part 1. Many interesting visual and mechanical phenomena occur in the critical region of fluids, both for the gas-liquid and liquid-liquid transitions. The precise thermodynamic and transport behavior here has broad consequences for the molecular theory of liquids. Previous studies in this laboratory on a liquid-liquid critical mixture via ultrasonics supported a basically classical analysis of fluid behavior by M. Fixman (e.g., the free energy is assumed to be analytic in the intensive thermodynamic variables), at least when the fluid is not too close to critical. Close to critical, a breakdown of classical concepts is evident in some well-defined ways. We have studied herein a liquid-liquid critical system complementary to all previous mixtures (possessing a lower critical mixing, or consolute, temperature) to look for new qualitative critical behavior. We did not find such new behavior in the ultrasonic absorption ascribable to the critical fluctuations, but we did find extra absorption due to chemical processes (which are, however, related to the mixing behavior generating the lower consolute point). We rederived, corrected, and extended Fixman's analysis to interpret our experimental results in these more complex circumstances. The entire account of theory and experiment is prefaced by an extensive introduction recounting the general status of liquid-state theory. The introduction provides a context for the present work and points out problems deserving attention; interest in these problems was stimulated by this work as well as by the work in Part 3.

Part 2. Among variational theories of electronic structure, the Hartree-Fock theory has proved particularly valuable for a practical understanding of such properties as chemical binding, electric multipole moments, and X-ray scattering intensity. It also provides the most tractable method of calculating first-order properties under external or internal one-electron perturbations, developed either explicitly in orders of perturbation theory or in the fully self-consistent method. The accuracy and consistency of first-order properties are poorer than those of zero-order properties, but this is most often due to the use of explicit approximations in solving the perturbed equations, or to inadequacy of the variational basis in size or composition. We have calculated the electric polarizabilities of H2, He, Li, Be, LiH, and N2 by Hartree-Fock theory, using exact perturbation theory or the fully self-consistent method, as dictated by convenience. By careful studies of total basis set composition, we obtained good approximations to limiting Hartree-Fock values of polarizabilities with bases of reasonable size. The values for all species, and for each direction in the molecular cases, are within 8% of experiment, or of the best theoretical values where experiment is unavailable. Our results support the use of unadorned Hartree-Fock theory for static polarizabilities needed in interpreting electron-molecule scattering data, collision-induced light scattering experiments, and other phenomena involving experimentally inaccessible polarizabilities.
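
For orientation, the computed quantity follows from the field dependence of the energy; in the standard finite-field picture (textbook relations, with notation chosen here rather than taken from the thesis),

\[
\alpha_{ij} = -\left.\frac{\partial^2 E(\mathbf{F})}{\partial F_i\,\partial F_j}\right|_{\mathbf{F}=0},
\qquad
\alpha_{zz} \approx \frac{2E(0) - E(F_z) - E(-F_z)}{F_z^2},
\]

where E(F) is the Hartree-Fock energy in a uniform field F. The perturbation-theoretic and fully self-consistent (coupled) routes mentioned above evaluate the same second derivative analytically rather than by numerical differencing.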

Part 3. Numerical integration of the close-coupled scattering equations has been carried out to obtain vibrational transition probabilities for some models of the electronically adiabatic H2-H2 collision. All the models use a Lennard-Jones interaction potential between nearest atoms in the collision partners. We have analyzed the results for some insight into the vibrational excitation process in its dependence on the energy of collision, the nature of the vibrational binding potential, and other factors. We conclude also that replacement of earlier, simpler models of the interaction potential by the Lennard-Jones form adds very little realism for all the complication it introduces. A brief introduction precedes the presentation of our work and places it in the context of attempts to understand the collisional activation process in chemical reactions as well as some other chemical dynamics.
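
As background, the close-coupled equations referred to here take the standard form obtained by expanding the scattering wavefunction in the vibrational channel states (a generic statement of the method, with notation chosen here):

\[
\frac{d^2 u_n(R)}{dR^2} + k_n^2\, u_n(R) = \frac{2\mu}{\hbar^2} \sum_m V_{nm}(R)\, u_m(R),
\qquad
k_n^2 = \frac{2\mu}{\hbar^2}\,\bigl(E - \epsilon_n\bigr),
\]

where the \epsilon_n are the channel vibrational energies and the V_{nm}(R) are matrix elements of the interaction potential (here built from Lennard-Jones atom-atom terms) between channel states. Numerical integration of this coupled system, followed by matching to free asymptotic solutions, yields the S-matrix and hence the vibrational transition probabilities.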

Relevance: 10.00%

Abstract:

The determination of the energy levels and the probabilities of transition between them, by the formal analysis of observed electronic, vibrational, and rotational band structures, forms the direct goal of all investigations of molecular spectra, but the significance of such data lies in the possibility of relating them theoretically to more concrete properties of molecules and the radiation field. From the well-developed electronic spectra of diatomic molecules, it has been possible, with the aid of the non-relativistic quantum mechanics, to obtain accurate moments of inertia, molecular potential functions, electronic structures, and detailed information concerning the coupling of spin and orbital angular momenta with the angular momentum of nuclear rotation. The silicon fluoride molecule has been investigated in this laboratory, and is found to emit bands whose vibrational and rotational structures can be analyzed in this detailed fashion.

Like silicon fluoride, however, the great majority of diatomic molecules are formed only under the unusual conditions of electrical discharge, or in high temperature furnaces, so that although their spectra are of great theoretical interest, the chemist is eager to proceed to a study of polyatomic molecules, in the hope that their more practically interesting structures might also be determined with the accuracy and assurance which characterize the spectroscopic determinations of the constants of diatomic molecules. Some progress has been made in the determination of molecular potential functions from the vibrational term values deduced from Raman and infrared spectra, but in no case can the calculations be carried out with great generality, since the number of known term values is always small compared with the total number of potential constants in even so restricted a potential function as the simple quadratic type. For the determination of nuclear configurations and bond distances, however, a knowledge of the rotational terms is required. The spectra of about twelve of the simpler polyatomic molecules have been subjected to rotational analyses, and a number of bond distances are known with considerable accuracy, yet the number of molecules whose rotational fine structure has been resolved even with the most powerful instruments is small. Consequently, it was felt desirable to investigate the spectra of a number of other promising polyatomic molecules, with the purpose of carrying out complete rotational analyses of all resolvable bands, and of ascertaining the value of the unresolved band envelopes in determining the structures of such molecules in the cases in which resolution is no longer possible. Although many of the compounds investigated absorbed too feebly to be photographed under high dispersion with the present infrared sensitizations, the locations and relative intensities of their bands, determined by low dispersion measurements, will be reported in the hope that these compounds may be reinvestigated in the future with improved techniques.

Relevance: 10.00%

Abstract:

In a probabilistic assessment of the performance of structures subjected to uncertain environmental loads such as earthquakes, an important problem is to determine the probability that the structural response exceeds some specified limits within a given duration of interest. This problem is known as the first excursion problem, and it has been a challenging problem in the theory of stochastic dynamics and reliability analysis. In spite of the enormous amount of attention the problem has received, there is no procedure available for its general solution, especially for engineering problems of interest where the complexity of the system is large and the failure probability is small.

The application of simulation methods to solving the first excursion problem is investigated in this dissertation, with the objective of assessing the probabilistic performance of structures subjected to uncertain earthquake excitations modeled by stochastic processes. From a simulation perspective, the major difficulty in the first excursion problem comes from the large number of uncertain parameters often encountered in the stochastic description of the excitation. Existing simulation tools are examined, with special regard to their applicability in problems with a large number of uncertain parameters. Two efficient simulation methods are developed to solve the first excursion problem. The first method is developed specifically for linear dynamical systems, and it is found to be extremely efficient compared to existing techniques. The second method is more robust with respect to the type of problem, and it is applicable to general dynamical systems. It is efficient for estimating small failure probabilities because the computational effort grows at a much slower rate with decreasing failure probability than standard Monte Carlo simulation. The simulation methods are applied to assess the probabilistic performance of structures subjected to uncertain earthquake excitation. Failure analysis is also carried out using the samples generated during simulation, which provides insight into the probable scenarios that occur given that a structure fails.
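
As a baseline for comparison, a standard Monte Carlo estimator of a first excursion probability can be sketched in a few lines (an illustrative toy with a hypothetical linear oscillator and made-up parameters, not one of the dissertation's methods):

    import numpy as np

    def max_response(excitation, dt=0.01, wn=2 * np.pi, zeta=0.05):
        """Peak |displacement| of a linear SDOF oscillator driven by
        `excitation`, integrated with a semi-implicit Euler scheme."""
        u = v = peak = 0.0
        for a in excitation:
            v += dt * (a - 2 * zeta * wn * v - wn ** 2 * u)
            u += dt * v
            peak = max(peak, abs(u))
        return peak

    rng = np.random.default_rng(0)
    n_samples, n_steps, dt = 2_000, 1_000, 0.01
    threshold = 0.5   # "failure": |u(t)| exceeds this during the duration

    failures = 0
    for _ in range(n_samples):
        # ~1000 i.i.d. Gaussian pulses: the discretized excitation itself is
        # the large set of uncertain parameters mentioned above.
        excitation = rng.normal(0.0, 1.0, n_steps) / np.sqrt(dt)
        if max_response(excitation, dt=dt) > threshold:
            failures += 1

    print("estimated first excursion probability:", failures / n_samples)
    # For a target probability of ~1e-4, plain Monte Carlo needs on the
    # order of 1e6 samples for ~10% coefficient of variation, which is the
    # cost the more efficient samplers are designed to avoid.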

Relevance: 10.00%

Abstract:

In the quest for a descriptive theory of decision-making, the rational actor model in economics imposes rather unrealistic expectations and abilities on human decision makers. The further we move from idealized scenarios, such as perfectly competitive markets, and ambitiously extend the reach of the theory to describe everyday decision-making situations, the less sense these assumptions make. Behavioural economics has instead proposed models based on assumptions that are more psychologically realistic, with the aim of gaining more precision and descriptive power. Increased psychological realism, however, comes at the cost of a greater number of parameters and greater model complexity. There is now a plethora of models, based on different assumptions and applicable in differing contextual settings, and selecting the right model to use tends to be an ad hoc process. In this thesis, we develop optimal experimental design methods and evaluate different behavioural theories against evidence from lab and field experiments.

We look at evidence from controlled laboratory experiments. Subjects are presented with choices between monetary gambles or lotteries. Different decision-making theories evaluate the choices differently and would make distinct predictions about the subjects' choices. Theories whose predictions are inconsistent with the actual choices can be systematically eliminated. Behavioural theories can have multiple parameters, requiring complex experimental designs with a very large number of possible choice tests. This imposes computational and economic constraints on using classical experimental design methods. We develop a methodology of adaptive tests, Bayesian Rapid Optimal Adaptive Designs (BROAD), that sequentially chooses the "most informative" test at each stage and, based on the response, updates its posterior beliefs over the theories, which informs the next most informative test to run. BROAD utilizes the Equivalence Class Edge Cutting (EC2) criterion to select tests. We prove that the EC2 criterion is adaptively submodular, which allows us to prove theoretical guarantees against the Bayes-optimal testing sequence even in the presence of noisy responses. In simulated ground-truth experiments, we find that the EC2 criterion recovers the true hypotheses with significantly fewer tests than more widely used criteria such as Information Gain and Generalized Binary Search. We show, theoretically as well as experimentally, that, surprisingly, these popular criteria can perform poorly in the presence of noise or subject errors. Furthermore, we use the adaptive submodularity of EC2 to implement an accelerated greedy version of BROAD, which leads to orders of magnitude speedup over other methods.
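
To make the selection rule concrete, here is a toy, noise-free sketch of EC2-style greedy test selection (an illustration consistent with the description above, not the thesis implementation; all hypotheses, classes, and numbers are synthetic):

    # Hypotheses are parameterized theories; each deterministically predicts
    # a subject's binary choice on each test. Edges connect hypotheses in
    # different theory classes; a test "cuts" an edge when its outcome rules
    # out at least one endpoint. Greedy picks the test with the largest
    # expected weight of edges cut.
    import itertools
    import numpy as np

    rng = np.random.default_rng(1)
    n_hypotheses, n_tests = 12, 30
    theory_class = rng.integers(0, 3, n_hypotheses)            # 3 classes
    predictions = rng.integers(0, 2, (n_hypotheses, n_tests))  # 0/1 choices
    prior = np.full(n_hypotheses, 1.0 / n_hypotheses)

    def expected_cut_weight(test, alive):
        """Expected total prior weight of inter-class edges cut by `test`."""
        total = 0.0
        for outcome in (0, 1):
            consistent = alive & (predictions[:, test] == outcome)
            p_outcome = prior[consistent].sum()
            if p_outcome == 0.0:
                continue
            cut = 0.0
            for h, g in itertools.combinations(np.flatnonzero(alive), 2):
                if theory_class[h] != theory_class[g] and not (
                        consistent[h] and consistent[g]):
                    cut += prior[h] * prior[g]
            total += p_outcome * cut
        return total

    alive = np.ones(n_hypotheses, dtype=bool)
    truth = 4   # hypothetical "true" hypothesis generating the responses
    while len(set(theory_class[alive].tolist())) > 1:
        best = max(range(n_tests), key=lambda t: expected_cut_weight(t, alive))
        if expected_cut_weight(best, alive) == 0.0:
            break   # no remaining test separates the surviving classes
        outcome = predictions[truth, best]        # observe the choice
        alive &= predictions[:, best] == outcome  # eliminate inconsistencies
    print("surviving theory class(es):", set(theory_class[alive].tolist()))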

We use BROAD to perform two experiments. First, we compare the main classes of theories for decision-making under risk, namely: expected value, prospect theory, constant relative risk aversion (CRRA), and moments models. Subjects are given an initial endowment and sequentially presented with choices between two lotteries, with the possibility of losses. The lotteries are selected using BROAD, and 57 subjects from Caltech and UCLA are incentivized by randomly realizing one of the lotteries chosen. Aggregate posterior probabilities over the theories show limited evidence in favour of CRRA and moments models. Classifying the subjects into types showed that most subjects are described by prospect theory, followed by expected value. Adaptive experimental design raises the possibility that subjects could engage in strategic manipulation, i.e., subjects could mask their true preferences and choose differently in order to obtain more favourable tests in later rounds, thereby increasing their payoffs. We pay close attention to this problem; strategic manipulation is ruled out because it is infeasible in practice and because we find no signatures of it in our data.

In the second experiment, we compare the main theories of time preference: exponential discounting, hyperbolic discounting, the "present bias" models (quasi-hyperbolic (α, β) discounting and fixed cost discounting), and generalized-hyperbolic discounting. 40 subjects from UCLA were given choices between two options: a smaller but more immediate payoff versus a larger but later payoff. We found very limited evidence for the present bias models and hyperbolic discounting; most subjects were classified as generalized-hyperbolic discounting types, followed by exponential discounting.
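
For reference, the standard one-parameter forms of the main discount-function families being compared are (textbook definitions; the parameter names are conventional rather than the thesis's, and fixed cost discounting is omitted):

\[
D_{\mathrm{exp}}(t) = \delta^{t}, \qquad
D_{\mathrm{hyp}}(t) = \frac{1}{1 + k t}, \qquad
D_{\mathrm{qh}}(t) = \begin{cases} 1, & t = 0,\\ \beta\,\delta^{t}, & t > 0, \end{cases} \qquad
D_{\mathrm{gh}}(t) = (1 + \alpha t)^{-\beta/\alpha},
\]

where the quasi-hyperbolic β < 1 captures present bias, and the generalized-hyperbolic form nests exponential discounting (α → 0) and hyperbolic discounting (α = β) as limiting cases.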

In these models the passage of time is linear. We instead consider a psychological model where the perception of time is subjective. We prove that when the biological (subjective) time is positively dependent, it gives rise to hyperbolic discounting and temporal choice inconsistency.

We also test the predictions of behavioural theories in the "wild". We focus on prospect theory, which emerged as the dominant theory in our lab experiments on risky choice. Loss aversion and reference dependence predict that consumers will behave in ways distinct from the predictions of the standard rational model. Specifically, loss aversion predicts that when an item is offered at a discount, the demand for it will be greater than its price elasticity alone explains. Even more importantly, when the item is no longer discounted, demand for its close substitute will increase excessively. We tested this prediction using a discrete choice model with a loss-averse utility function on data from a large eCommerce retailer. Not only did we identify loss aversion, but we also found that the effect decreased with consumers' experience. We outline the policy implications that consumer loss aversion entails, and strategies for competitive pricing.

In future work, BROAD can be widely applied to testing different behavioural models, e.g., in social preference and game theory, and in different contextual settings. Additional measurements beyond choice data, including biological measurements such as skin conductance, can be used to eliminate hypotheses more rapidly and speed up model comparison. Discrete choice models also provide a framework for testing behavioural models with field data, and encourage combined lab-field experiments.

Relevance: 10.00%

Abstract:

The initial probabilities of activated, dissociative chemisorption of methane and ethane on Pt(110)-(1 x 2) have been measured. The surface temperature was varied from 450 to 900 K with the reactant gas temperature held constant at 300 K. Under these conditions, we probe the kinetics of dissociation via a trapping-mediated (as opposed to 'direct') mechanism. It was found that the probabilities of dissociation of both methane and ethane were strong functions of the surface temperature, with apparent activation energies of 14.4 kcal/mol for methane and 2.8 kcal/mol for ethane, which implies that the methane and ethane molecules have fully accommodated to the surface temperature. Kinetic isotope effects were observed for both reactions, indicating that C-H bond cleavage is involved in the rate-limiting step. A mechanistic model based on the trapping-mediated mechanism is used to explain the observed kinetic behavior. The activation energies for C-H bond dissociation of the thermally accommodated methane and ethane on the surface, extracted from the model, are 18.4 and 10.3 kcal/mol, respectively.
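
As an illustration of how such apparent activation energies are typically extracted, here is a generic Arrhenius fit (the probability values below are fabricated placeholders, not the thesis data):

    # ln(P) versus 1/T is linear for Arrhenius behavior; the apparent
    # activation energy is -slope * R.
    import numpy as np

    R = 1.987e-3                                         # kcal/(mol K)
    T = np.array([450.0, 550.0, 650.0, 750.0, 900.0])    # surface temps, K
    P = 5e-2 * np.exp(-14.4 / (R * T))                   # synthetic data

    slope, _ = np.polyfit(1.0 / T, np.log(P), 1)
    print(f"apparent activation energy: {-slope * R:.1f} kcal/mol")  # 14.4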

The studies of the catalytic decomposition of formic acid on the Ru(001) surface with thermal desorption mass spectrometry following the adsorption of DCOOH and HCOOH on the surface at 130 and 310 K are described. Formic acid (DCOOH) chemisorbs dissociatively on the surface via both the cleavage of its O-H bond to form a formate and a hydrogen adatom, and the cleavage of its C-O bond to form carbon monoxide, a deuterium adatom, and a hydroxyl (OH). The former is the predominant reaction. The rate of desorption of carbon dioxide is a direct measure of the kinetics of decomposition of the surface formate. It is characterized by a kinetic isotope effect, an increasingly narrow FWHM, and an upward shift in peak temperature with θ_T, the coverage of the dissociatively adsorbed formic acid. The FWHM and the peak temperature change from 18 K and 326 K at θ_T = 0.04 to 8 K and 395 K at θ_T = 0.89. The increase in the apparent activation energy of the C-D bond cleavage is largely a result of self-poisoning by the formate, whose presence on the surface alters the electronic properties of the surface such that the activation energy of the decomposition of formate is increased. The variation of the activation energy for carbon dioxide formation with θ_T accounts for the observed sharp carbon dioxide peak. The coverage of surface formate can be adjusted over a relatively wide range, so that the activation energy for C-D bond cleavage in the case of DCOOH can be adjusted to be below, approximately equal to, or well above the activation energy for the recombinative desorption of the deuterium adatoms. Accordingly, the desorption of deuterium was observed to be governed completely by the desorption kinetics of the deuterium adatoms at low θ_T, jointly by the kinetics of deuterium desorption and C-D bond cleavage at intermediate θ_T, and solely by the kinetics of C-D bond cleavage at high θ_T. The overall branching ratio of the formate to carbon dioxide and carbon monoxide is approximately unity, regardless of the initial coverage θ_T, even though the activation energy for the production of carbon dioxide varies with θ_T. The desorption of water, which implies C-O bond cleavage of the formate, appears at approximately the same temperature as that of carbon dioxide. These observations suggest that the cleavage of the C-D bond and that of the C-O bond of two surface formates are coupled, possibly via the formation of a short-lived surface complex that is the precursor to the decomposition.

The measurement of steady-state rates is demonstrated here to be valuable in determining the kinetics associated with a short-lived, molecularly adsorbed precursor to further surface reactions, as illustrated by determining the kinetic parameters for the dissociation of the molecular precursor of formaldehyde on the Pt(110)-(1 x 2) surface.

Overlayers of nitrogen adatoms on Ru(001) have been characterized both by thermal desorption mass spectrometry and low-energy electron diffraction, as well as chemically via the postadsorption and desorption of ammonia and carbon monoxide.

The nitrogen-adatom overlayer was prepared by decomposing ammonia thermally on the surface at a pressure of 2.8 x 10^(-6) Torr and a temperature of 480 K. The saturated overlayer prepared under these conditions has associated with it a (√247/10 x √247/10)R22.7° LEED pattern, has two peaks in its thermal desorption spectrum, and has a fractional surface coverage of 0.40. Annealing the overlayer to approximately 535 K results in a rather sharp (√3 x √3)R30° LEED pattern with an associated fractional surface coverage of one-third. Annealing the overlayer further to 620 K results in the disappearance of the low-temperature thermal desorption peak and the appearance of a rather fuzzy p(2x2) LEED pattern with an associated fractional surface coverage of approximately one-fourth. In the low coverage limit, the presence of the (√3 x √3)R30° N overlayer alters the surface in such a way that the binding energy of ammonia is increased by 20% relative to the clean surface, whereas that of carbon monoxide is reduced by 15%.

A general methodology for the indirect relative determination of absolute fractional surface coverages has been developed and was utilized to determine the saturation fractional coverage of hydrogen on Ru(001). Formaldehyde was employed as a bridge from the known reference point of the saturation fractional coverage of carbon monoxide to the unknown saturation fractional coverage of hydrogen on Ru(001). We find that θ_H(sat) = 1.02 (±0.05), i.e., the surface stoichiometry is Ru : H = 1 : 1. The relative nature of the method, which cancels systematic errors, together with the utilization of a glass envelope around the mass spectrometer, which reduces spurious contributions in the thermal desorption spectra, results in high accuracy in the determination of absolute fractional coverages.

Relevance: 10.00%

Abstract:

Network information theory and channels with memory are two important but difficult frontiers of information theory. In this two-part dissertation, we study these two areas, each comprising one part. For the first area we study the so-called entropy vectors via finite group theory, and the network codes constructed from finite groups. In particular, we identify the smallest finite group that violates the Ingleton inequality, an inequality respected by all linear network codes but not satisfied by all entropy vectors. Based on the analysis of this group we generalize it to several families of Ingleton-violating groups, which may be used to design good network codes. In that regard, we study the network codes constructed with finite groups, and in particular show that linear network codes are embedded in the group network codes constructed with these Ingleton-violating families. Furthermore, such codes are strictly more powerful than linear network codes, as they are able to violate the Ingleton inequality while linear network codes cannot. For the second area, we study the impact of memory on the channel capacity through a novel communication system: the energy harvesting channel. Unlike in traditional communication systems, the transmitter of an energy harvesting channel is powered by an exogenous energy harvesting device and a finite-sized battery. As a consequence, at each time the system can transmit only a symbol whose energy consumption is no more than the energy currently available. This new type of power supply introduces an unprecedented input constraint for the channel, which is random, instantaneous, and has memory. Furthermore, the energy harvesting process is naturally observed causally at the transmitter, while no such information is provided to the receiver. Both of these features pose great challenges for the analysis of the channel capacity. In this work we use techniques from channels with side information and finite state channels to obtain lower and upper bounds on the capacity of the energy harvesting channel. In particular, we study the stationarity and ergodicity conditions of a surrogate channel to compute and optimize the achievable rates for the original channel. In addition, for practical code design of the system we study the pairwise error probabilities of the input sequences.
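
For reference, the Ingleton inequality mentioned above, written for the entropic quantities of four random variables (its standard information-theoretic form; the notation is mine, not the thesis's):

\[
I(A;B) \;\le\; I(A;B \mid C) + I(A;B \mid D) + I(C;D).
\]

Ranks of linear subspaces, and hence linear network codes, always satisfy this inequality, while entropy vectors of general random variables, including those arising from certain non-abelian finite groups, can violate it.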

Relevance: 10.00%

Abstract:

This thesis examines the collapse risk of tall steel braced frame buildings using rupture-to-rafters simulations for a suite of San Andreas earthquakes. Two key advancements in this work are the development of (i) a rational methodology for assigning scenario earthquake probabilities and (ii) an approach to broadband ground motion simulation that is free of artificial corrections. The work can be divided into the following sections: earthquake source modeling, earthquake probability calculations, ground motion simulations, building response, and performance analysis.

As a first step, kinematic source inversions of past earthquakes in the magnitude range of 6-8 are used to simulate 60 scenario earthquakes on the San Andreas fault. For each scenario earthquake a 30-year occurrence probability is calculated, and we present a rational method to redistribute the forecast earthquake probabilities from UCERF to the simulated scenario earthquakes. We illustrate the inner workings of the method through an example involving earthquakes on the San Andreas fault in southern California.
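
One simple way to implement such a redistribution (a deliberately crude, hypothetical scheme shown only for illustration; the thesis's actual method and weights are its own):

    # Assign each forecast rupture's probability to the scenario nearest in
    # (magnitude, along-fault position); all numbers below are made up.
    import numpy as np

    # (magnitude, position km, 30-yr probability) for forecast ruptures
    forecast = [(7.8, 120.0, 0.012), (7.2, 300.0, 0.020), (6.5, 150.0, 0.035)]
    # (magnitude, position km) for the simulated scenario earthquakes
    scenarios = [(7.9, 100.0), (7.0, 320.0), (6.6, 140.0)]

    scenario_prob = np.zeros(len(scenarios))
    for mag, pos, prob in forecast:
        # distance in a scaled (magnitude, position) space; scaling is ad hoc
        d = [abs(mag - m) + abs(pos - x) / 100.0 for m, x in scenarios]
        scenario_prob[int(np.argmin(d))] += prob

    print(scenario_prob)   # 30-year probability assigned to each scenario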

Next, three-component broadband ground motion histories are computed at 636 sites in the greater Los Angeles metropolitan area by superposing short-period (0.2 s-2.0 s) empirical Green's function synthetics on long-period (> 2.0 s) seismograms computed from the kinematic source models using the spectral element method.

Using the ground motions at the 636 sites for the 60 scenario earthquakes, 3-D nonlinear time-history analyses of several variants of an 18-story steel braced frame building, designed for three soil types using the 1994 and 1997 Uniform Building Code provisions, are conducted. Model performance is classified into one of five performance levels: Immediate Occupancy, Life Safety, Collapse Prevention, Red-Tagged, and Model Collapse. The results are combined with the 30-year probabilities of occurrence of the San Andreas scenario earthquakes using the PEER performance-based earthquake engineering framework to determine the probability of exceedance of these limit states over the next 30 years.
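
Schematically, the combination step is an application of the theorem of total probability (the expression below is a simplification supplied here for orientation; it ignores, e.g., the joint occurrence of multiple scenarios):

\[
P(\mathrm{LS}) \;\approx\; \sum_{k=1}^{60} P(\mathrm{LS} \mid E_k)\, P(E_k),
\]

where P(E_k) is the 30-year occurrence probability assigned to scenario earthquake E_k and P(LS | E_k) is estimated from the building model's simulated response at the site of interest under that scenario.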

Relevance: 10.00%

Abstract:

Few credible source models are available from past large-magnitude earthquakes. A stochastic source model generation algorithm thus becomes necessary for robust risk quantification using scenario earthquakes. We present an algorithm that combines the physics of fault ruptures as imaged in laboratory earthquakes with stress estimates on the fault constrained by field observations to generate stochastic source models for large-magnitude (Mw 6.0-8.0) strike-slip earthquakes. The algorithm is validated through a statistical comparison of synthetic ground motion histories from a stochastically generated source model for a magnitude 7.90 earthquake and a kinematic finite-source inversion of a past earthquake of equivalent magnitude on a geometrically similar fault. The synthetic dataset comprises three-component ground motion waveforms, computed at 636 sites in southern California, for ten hypothetical rupture scenarios (five hypocenters, each with two rupture directions) on the southern San Andreas fault. A similar validation exercise is conducted for a magnitude 6.0 earthquake, the lower magnitude limit for the algorithm. Additionally, ground motions from the Mw 7.9 earthquake simulations are compared against predictions by the Campbell-Bozorgnia NGA relation as well as the ShakeOut scenario earthquake. The algorithm is then applied to generate fifty source models for a hypothetical magnitude 7.9 earthquake originating at Parkfield, with rupture propagating from north to south (towards Wrightwood), similar to the 1857 Fort Tejon earthquake. Using the spectral element method, three-component ground motion waveforms are computed in the Los Angeles basin for each scenario earthquake, and the sensitivity of ground shaking intensity to seismic source parameters (such as the percentage of asperity area relative to the fault area, rupture speed, and risetime) is studied.

Under plausible San Andreas fault earthquakes in the next 30 years, modeled using the stochastic source algorithm, the performance of two 18-story steel moment frame buildings (UBC 1982 and 1997 designs) in southern California is quantified. The approach integrates rupture-to-rafters simulations into the PEER performance-based earthquake engineering (PBEE) framework. Using stochastic sources and computational seismic wave propagation, three-component ground motion histories at 636 sites in southern California are generated for sixty scenario earthquakes on the San Andreas fault. The ruptures, with moment magnitudes in the range of 6.0-8.0, are assumed to occur at five locations on the southern section of the fault. Two unilateral rupture propagation directions are considered. The 30-year probabilities of all plausible ruptures in this magnitude range and in that section of the fault, as forecast by the United States Geological Survey, are distributed among these 60 earthquakes based on proximity and moment release. The response of the two 18-story buildings hypothetically located at each of the 636 sites under 3-component shaking from all 60 events is computed using 3-D nonlinear time-history analysis. Using these results, the probability of the structural response exceeding Immediate Occupancy (IO), Life Safety (LS), and Collapse Prevention (CP) performance levels under San Andreas fault earthquakes over the next thirty years is evaluated.

Furthermore, the conditional and marginal probability distributions of peak ground velocity (PGV) and peak ground displacement (PGD) in Los Angeles and surrounding basins due to earthquakes occurring primarily on the mid-section of the southern San Andreas fault are determined using Bayesian model class identification. Simulated ground motions at sites within 55-75 km of the source, from a suite of 60 earthquakes (Mw 6.0-8.0) primarily rupturing the mid-section of the San Andreas fault, provide the PGV and PGD data.

Relevance: 10.00%

Abstract:

Let $\{Z_n\}_{n=-\infty}^{\infty}$ be a stochastic process with state space $S_1 = \{0, 1, \ldots, D-1\}$. Such a process is called a chain of infinite order. The transitions of the chain are described by the functions

\[
Q_i(i^{(0)}) = P\bigl(Z_n = i \mid Z_{n-1} = i^{(0)}_1,\, Z_{n-2} = i^{(0)}_2,\, \ldots\bigr) \qquad (i \in S_1),
\]

where $i^{(0)} = (i^{(0)}_1, i^{(0)}_2, \ldots)$ ranges over infinite sequences from $S_1$. If $i^{(n)} = (i^{(n)}_1, i^{(n)}_2, \ldots)$ for $n = 1, 2, \ldots$, then $i^{(n)} \to i^{(0)}$ means that for each $k$, $i^{(n)}_k = i^{(0)}_k$ for all $n$ sufficiently large.

Given functions $Q_i(i^{(0)})$ such that

(i) $0 \le Q_i(i^{(0)}) \le \xi < 1$,

(ii) $\sum_{i=0}^{D-1} Q_i(i^{(0)}) \equiv 1$,

(iii) $Q_i(i^{(n)}) \to Q_i(i^{(0)})$ whenever $i^{(n)} \to i^{(0)}$,

we prove the existence of a stationary chain of infinite order $\{Z_n\}$ whose transitions are given by

\[
P\bigl(Z_n = i \mid Z_{n-1}, Z_{n-2}, \ldots\bigr) = Q_i(Z_{n-1}, Z_{n-2}, \ldots)
\]

with probability 1. The method also yields stationary chains $\{Z_n\}$ for which (iii) does not hold but whose transition probabilities are, in a sense, "locally Markovian." These and similar results extend a paper by T. E. Harris [Pac. J. Math., 5 (1955), 707-724].

Included is a new proof of the existence and uniqueness of a stationary absolute distribution for an Nth order Markov chain in which all transitions are possible. This proof allows us to achieve our main results without the use of limit theorem techniques.
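
To make the Nth-order statement concrete, the standard reduction treats N-tuples of states as the states of a first-order chain, whose stationary distribution is then an eigenvector computation. A minimal sketch with a made-up second-order chain on two states, in which all transitions are possible:

    # Reduce a 2nd-order Markov chain on {0,1} to a 1st-order chain on pairs
    # (Z_{n-1}, Z_n); its stationary distribution is the eigenvector of the
    # transition matrix for eigenvalue 1. Transition values are hypothetical.
    import itertools
    import numpy as np

    D, N = 2, 2
    states = list(itertools.product(range(D), repeat=N))

    def q(i, history):
        """Hypothetical P(Z_n = i | Z_{n-1} = history[0], Z_{n-2} = history[1]);
        values stay strictly inside (0, 1), so all transitions are possible."""
        p1 = 0.2 + 0.3 * history[0] + 0.2 * history[1]
        return p1 if i == 1 else 1.0 - p1

    T = np.zeros((len(states), len(states)))
    for a, (z2, z1) in enumerate(states):      # state a = (Z_{n-2}, Z_{n-1})
        for i in range(D):
            b = states.index((z1, i))          # next state (Z_{n-1}, Z_n)
            T[b, a] = q(i, (z1, z2))

    w, v = np.linalg.eig(T)                    # T is column-stochastic
    pi = np.real(v[:, np.argmin(abs(w - 1.0))])
    pi /= pi.sum()                             # normalize the Perron vector
    print(dict(zip(states, pi.round(4))))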

Relevance: 10.00%

Abstract:

Part I

Solutions of Schrödinger's equation for a system of two particles bound in various stationary one-dimensional potential wells and repelling each other with a Coulomb force are obtained by the method of finite differences. The general properties of such systems are worked out in detail for the case of two electrons in an infinite square well. For small well widths (1-10 a.u.) the energy levels lie above those of the noninteracting-particle model by as much as a factor of 4, although excitation energies are only half again as great. The analytical form of the solutions is obtained and it is shown that every eigenstate is doubly degenerate due to the "pathological" nature of the one-dimensional Coulomb potential. This degeneracy is verified numerically by the finite-difference method. The properties of the square-well system are compared with those of the free-electron and hard-sphere models; perturbation and variational treatments are also carried out using the hard-sphere Hamiltonian as a zeroth-order approximation. The lowest several finite-difference eigenvalues converge from below, with decreasing mesh size, to energies below those of the "best" linear variational function consisting of hard-sphere eigenfunctions. The finite-difference solutions in general yield expectation values and matrix elements as accurate as those obtained using the "best" variational function.
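
The workhorse here, the finite-difference eigenvalue method, is easy to sketch for the simpler one-particle infinite square well (a toy version of the technique; the thesis treats the far harder two-electron problem):

    # Discretize -(1/2) u'' = E u on (0, L) with u(0) = u(L) = 0 (atomic
    # units); the 3-point stencil turns it into a symmetric eigenproblem.
    import numpy as np

    L, n = 1.0, 200                          # well width, interior points
    h = L / (n + 1)
    main = np.full(n, 1.0 / h ** 2)          # diagonal entries
    off = np.full(n - 1, -0.5 / h ** 2)      # off-diagonal entries
    H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

    E = np.linalg.eigvalsh(H)[:3]
    exact = np.array([(k * np.pi) ** 2 / 2 for k in (1, 2, 3)])
    print(np.round(E, 3))      # slightly below the exact values
    print(np.round(exact, 3))  # pi^2 k^2 / 2 = 4.935, 19.739, 44.413
    # The discrete eigenvalues lie below the exact ones and rise toward
    # them as the mesh is refined, i.e., they converge from below, as
    # noted in the abstract above.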

The system of two electrons in a parabolic well is also treated by finite differences. In this system it is possible to separate the center-of-mass motion and hence to effect a considerable numerical simplification. It is shown that the pathological one-dimensional Coulomb potential gives rise to doubly degenerate eigenstates for the parabolic well in exactly the same manner as for the infinite square well.

Part II

A general method of treating inelastic collisions quantum mechanically is developed and applied to several one-dimensional models. The formalism is first developed for nonreactive "vibrational" excitations of a bound system by an incident free particle. It is then extended to treat simple exchange reactions of the form A + BC → AB + C. The method consists essentially of finding a set of linearly independent solutions of the Schrödinger equation such that each solution of the set satisfies a distinct, yet arbitrary boundary condition specified in the asymptotic region. These linearly independent solutions are then combined to form a total scattering wavefunction having the correct asymptotic form. The method of finite differences is used to determine the linearly independent functions.
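
In one common convention (stated here for orientation; the thesis's notation may differ), the correct asymptotic form referred to above gives each channel wavefunction, for an incident wave in channel $n_0$,

\[
u_n(x) \;\xrightarrow{\,x \to \infty\,}\; \delta_{n n_0}\, e^{-i k_{n_0} x} \;-\; S_{n n_0} \sqrt{\frac{k_{n_0}}{k_n}}\; e^{i k_n x},
\]

with transition probabilities $P_{n_0 \to n} = |S_{n n_0}|^2$; the square-root factor enforces flux normalization so that the probabilities sum to one.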

The theory is applied to the impulsive collision of a free particle with a particle bound in (1) an infinite square well and (2) a parabolic well. Calculated transition probabilities agree well with previously obtained values.

Several models for the exchange reaction involving three identical particles are also treated: (1) infinite-square-well potential surface, in which all three particles interact as hard spheres and each two-particle subsystem (i.e. BC and AB) is bound by an attractive infinite-square-well potential; (2) truncated parabolic potential surface, in which the two-particle subsystems are bound by a harmonic oscillator potential which becomes infinite for interparticle separations greater than a certain value; (3) parabolic (untruncated) surface. Although there are no published values with which to compare our reaction probabilities, several independent checks on internal consistency indicate that the results are reliable.

Relevance: 10.00%

Abstract:

Two general, numerically exact, quantum mechanical methods have been developed for the calculation of energy transfer in molecular collisions. The methods do not treat electronic transitions because of the exchange symmetry of the electrons. All interactions between the atoms in the system are written as potential energies.

The first method is a matrix generalization of the invariant imbedding procedure [17, 20], adapted for multi-channel collision processes. The second method is based on a direct integration of the matrix Schrödinger equation, with a re-orthogonalization transform applied during the integration.

Both methods have been applied to a collinear collision model for two diatoms interacting via a repulsive exponential potential. Two major studies were performed. The first was to determine the energy dependence of the transition probabilities for an H2-on-H2 model system. Transitions are possible between translational energy and vibrational energy, and from vibrational modes of one H2 to the other H2. The second study was to determine the variation of the vibrational energy transfer probability with the difference in natural frequency between two diatoms similar to N2.

Comparisons were made to previous approximate analytical solutions of this same problem. For translational to vibrational energy transfer, the previous approximations were not adequate. For vibrational to vibrational energy transfer of one vibrational quantum, the approximations were quite good.

Relevance: 10.00%

Abstract:

In the first part of the study, an RF-coupled, atmospheric-pressure, laminar plasma jet of argon was investigated for thermodynamic equilibrium and some rate processes.

Improved values of transition probabilities for 17 lines of argon I were developed from known values for 7 lines. The effect of inhomogeneity of the source was pointed out.

The temperatures, T, and the electron densities, n_e, were determined spectroscopically from the population densities of the higher excited states, assuming the Saha-Boltzmann relationship to be valid for these states. The axial velocities, v_z, were measured by tracing the paths of particles of boron nitride using a three-dimensional mapping technique. The above quantities varied in the following ranges: 10^12 < n_e < 10^15 particles/cm^3, 3500 < T < 11,000 K, and 200 < v_z < 1200 cm/sec.
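
The Saha-Boltzmann relation invoked here has the standard textbook form (symbols chosen here, not taken from the thesis):

\[
\frac{n_e\, n_{+}}{n_0} \;=\; \frac{2 g_{+}}{g_0} \left(\frac{2\pi m_e k T}{h^2}\right)^{3/2} e^{-E_{\mathrm{ion}}/kT},
\]

where $n_0$ and $n_{+}$ are the neutral and ion densities and $g_0$, $g_{+}$ their statistical weights; combined with the Boltzmann distribution over excited-state populations, it allows T and n_e to be inferred from the measured population densities.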

The absence of excitation equilibrium for the lower excitation population including the ground state under certain conditions of T and ne was established and the departure from equilibrium was examined quantitatively. The ground state was shown to be highly underpopulated for the decaying plasma.

Rates of recombination between electrons and ions were obtained by solving the steady-state equation of continuity for electrons. The observed rates were consistent with a dissociative-molecular ion mechanism with a steady-state assumption for the molecular ions.

In the second part of the study, the decomposition of NO was studied in the plasma at lower temperatures. The mole fractions of NO, denoted by x_NO, were determined gas-chromatographically and varied in the range 0.0012 < x_NO < 0.0055. The temperatures were measured pyrometrically and varied in the range 1300 < T < 1750 K. The observed rates of decomposition were orders of magnitude greater than those obtained by previous workers under purely thermal reaction conditions. The overall activation energy was about 9 kcal/g-mol, considerably lower than the value under thermal conditions. The effect of excess nitrogen was to reduce the rate of decomposition of NO and to increase the order of the reaction with respect to NO from 1.33 to 1.85. The observed rates were consistent with a chain mechanism in which atomic nitrogen and oxygen act as chain carriers. The increased rates of decomposition and the reduced activation energy in the presence of the plasma could be explained on the basis of the observed large amount of atomic nitrogen, which was probably formed as the result of reactions between excited atoms and ions of argon and the molecular nitrogen.

Relevance: 10.00%

Abstract:

The Everett interpretation of quantum mechanics is an increasingly popular alternative to the traditional Copenhagen interpretation, but a few major issues prevent its widespread adoption. One of these issues is the origin of probabilities in the Everett interpretation, which this thesis surveys. The most successful resolution of the probability problem thus far is the decision-theoretic program, which attempts to frame probabilities as outcomes of rational decision making. This marks a departure from orthodox interpretations of probabilities in the physical sciences, where probabilities are thought to be objective, stemming from symmetry considerations. This thesis offers an evaluation of the decision-theoretic program.