15 results for Binary Cyclically Permutable Constant Weight Codes

in CaltechTHESIS


Relevance: 20.00%

Abstract:

The speciation of water in a variety of hydrous silicate glasses, including simple and rhyolitic compositions, synthesized over a range of experimental conditions with up to 11 weight percent water has been determined using infrared spectroscopy. This technique has been calibrated with a series of standard glasses and provides a precise and accurate method for determining the concentrations of molecular water and hydroxyl groups in these glasses.

For all the compositions studied, most of the water is dissolved as hydroxyl groups at total water contents less than 3-4 weight percent; at higher total water contents, molecular water becomes the dominant species. For total water contents above 3-4 weight percent, the amount of water dissolved as hydroxyl groups is approximately constant at about 2 weight percent and additional water is incorporated as molecular water. Although there are small but measurable differences in the ratio of molecular water to hydroxyl groups at a given total water content among these silicate glasses, the speciation of water is similar over this range of composition. The trends in the concentrations of the H-bearing species in the hydrous glasses included in this study are similar to those observed in other silicate glasses using either infrared or NMR spectroscopy.

The effects of pressure and temperature on the speciation of water in albitic glasses have been investigated. The ratio of molecular water to hydroxyl groups at a given total water content is independent of the pressure and temperature of equilibration for albitic glasses synthesized in a rapidly quenching piston-cylinder apparatus at temperatures greater than 1000°C and pressures greater than 8 kbar. For hydrous glasses quenched from melts cooled at slower rates (i.e., in internally heated or air-quenched cold-seal pressure vessels), there is an increase in the ratio of molecular water to hydroxyl group content that probably reflects reequilibration of the melt to lower temperatures during slow cooling.

Molecular water and hydroxyl group concentrations in glasses provide information on the dissolution mechanisms of water in silicate liquids. Several mixing models involving homogeneous equilibria of the form H_2O + O = 2 OH among melt species have been explored for albitic melts. These models can account for the measured species concentrations if the effects of non-ideal behavior or mixing of polymerized units are included, or by allowing for the presence of several different types of anhydrous species.
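
A minimal numerical sketch of a homogeneous equilibrium of this form, assuming ideal mixing on a single-oxygen mole-fraction basis; the equilibrium constant K and the bookkeeping are illustrative assumptions, not fitted values from the thesis:

```python
import math

K = 0.2  # illustrative equilibrium constant for H_2O + O = 2 OH

def speciate(B):
    """Split bulk water mole fraction B into (molecular H2O, hydroxyl OH).

    With y = X_OH / 2, X_H2O = B - y, and X_O = 1 - B - y, the condition
    K = X_OH**2 / (X_H2O * X_O) reduces to the quadratic
    (4 - K) y**2 + K y - K B (1 - B) = 0; we take the physical root.
    """
    a, b, c = 4.0 - K, K, -K * B * (1.0 - B)
    y = (-b + math.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)
    return B - y, 2.0 * y  # (molecular water, hydroxyl)

# the qualitative trend described above: hydroxyl dominates at low total
# water, molecular water dominates at high total water
m_lo, oh_lo = speciate(0.02)
m_hi, oh_hi = speciate(0.30)
```

Even this crude ideal-mixing model reproduces the observed crossover from hydroxyl-dominated to molecular-water-dominated speciation as total water increases.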

A thermodynamic model for hydrous albitic melts has been developed based on the assumption that the activity of water in the melt is equal to the mole fraction of molecular water determined by infrared spectroscopy. This model can account for the position of the water-saturated solidus of crystalline albite, the pressure and temperature dependence of the solubility of water in albitic melt, and the volumes of hydrous albitic melts. To the extent that it is successful, this approach provides a direct link between measured species concentrations in hydrous albitic glasses and the macroscopic thermodynamic properties of the albite-water system.

The approach taken in modelling the thermodynamics of hydrous albitic melts has been generalized to other silicate compositions. Spectroscopic measurements of species concentrations in rhyolitic and simple silicate glasses quenched from melts equilibrated with water vapor provide important constraints on the thermodynamic properties of these melt-water systems. In particular, the assumption that the activity of water is equal to the mole fraction of molecular water has been tested in detail and shown to be a valid approximation for a range of hydrous silicate melts, and the partial molar volume of water in these systems has been constrained. Thus, the results of this study provide a useful thermodynamic description of hydrous melts that can be readily applied to other melt-water systems for which spectroscopic measurements of the H-bearing species are available.

Relevance: 20.00%

Abstract:

This thesis consists of three separate studies of roles that black holes might play in our universe.

In the first part we formulate a statistical method for inferring the cosmological parameters of our universe from LIGO/VIRGO measurements of the gravitational waves produced by coalescing black-hole/neutron-star binaries. This method is based on the cosmological distance-redshift relation, with "luminosity distances" determined directly, and redshifts indirectly, from the gravitational waveforms. Using the current estimates of binary coalescence rates and projected "advanced" LIGO noise spectra, we conclude that by our method the Hubble constant should be measurable to within an error of a few percent. The errors for the mean density of the universe and the cosmological constant will depend strongly on the size of the universe, varying from about 10% for a "small" universe up to and beyond 100% for a "large" universe. We further study the effects of random gravitational lensing and find that it may strongly impair the determination of the cosmological constant.
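
A toy version of the idea can be sketched in a few lines: at low redshift the distance-redshift relation reduces to d_L ≈ cz/H0, so noisy "luminosity distance" measurements paired with redshifts constrain the Hubble constant. This is only a low-z caricature (the thesis infers redshifts indirectly from the waveforms); all numbers below are simulated:

```python
import random
import statistics

random.seed(0)
C = 299792.458   # speed of light, km/s
H0_TRUE = 70.0   # km/s/Mpc, assumed true value for the simulation

# simulate "standard siren" events: redshift plus a luminosity distance
# measured with 5% scatter
events = []
for _ in range(100):
    z = random.uniform(0.005, 0.05)               # low redshift regime
    d = C * z / H0_TRUE * random.gauss(1.0, 0.05) # noisy distance, Mpc
    events.append((d, z))

# low-z estimator: H0 ~ c z / d_L, combined robustly across events
H0_EST = statistics.median(C * z / d for d, z in events)
```

With a hundred events and a few percent distance scatter, the recovered value lands within a few percent of the input, consistent with the error budget quoted above.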

In the second part of this thesis we disprove a conjecture that black holes cannot form in an early, inflationary era of our universe, because of a quantum-field-theory induced instability of the black-hole horizon. This instability was supposed to arise from the difference in temperatures of any black-hole horizon and the inflationary cosmological horizon; it was thought that this temperature difference would make every quantum state that is regular at the cosmological horizon be singular at the black-hole horizon. We disprove this conjecture by explicitly constructing a quantum vacuum state that is everywhere regular for a massless scalar field. We further show that this quantum state has all the nice thermal properties that one has come to expect of "good" vacuum states, both at the black-hole horizon and at the cosmological horizon.

In the third part of the thesis we study the evolution and implications of a hypothetical primordial black hole that might have found its way into the center of the Sun or any other solar-type star. As a foundation for our analysis, we generalize the mixing-length theory of convection to an optically thick, spherically symmetric accretion flow (and find in passing that the radial stretching of the inflowing fluid elements leads to a modification of the standard Schwarzschild criterion for convection). When the accretion is that of solar matter onto the primordial hole, the rotation of the Sun causes centrifugal hangup of the inflow near the hole, resulting in an "accretion torus" which produces an enhanced outflow of heat. We find, however, that the turbulent viscosity, which accompanies the convective transport of this heat, extracts angular momentum from the inflowing gas, thereby buffering the torus into a lower luminosity than one might have expected. As a result, the solar surface will not be influenced noticeably by the torus's luminosity until at most three days before the Sun is finally devoured by the black hole. As a simple consequence, accretion onto a black hole inside the Sun cannot be an answer to the solar neutrino puzzle.

Relevance: 20.00%

Abstract:

This thesis addresses whether it is possible to build a robust memory device for quantum information. Many schemes for fault-tolerant quantum information processing have been developed so far, one of which, called topological quantum computation, makes use of degrees of freedom that are inherently insensitive to local errors. However, this scheme is not so reliable against thermal errors. Other fault-tolerant schemes achieve better reliability through active error correction, but incur a substantial overhead cost. Thus, it is of practical importance and theoretical interest to design and assess fault-tolerant schemes that work well at finite temperature without active error correction.

In this thesis, a three-dimensional gapped lattice spin model is found which demonstrates for the first time that a reliable quantum memory at finite temperature is possible, at least to some extent. When quantum information is encoded into a highly entangled ground state of this model and subjected to thermal errors, the errors remain easily correctable for a long time without any active intervention, because a macroscopic energy barrier keeps the errors well localized. As a result, stored quantum information can be retrieved faithfully for a memory time which grows exponentially with the square of the inverse temperature. In contrast, for previously known types of topological quantum storage in three or fewer spatial dimensions the memory time scales exponentially with the inverse temperature, rather than its square.
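
The gap between the two quoted scaling laws can be made concrete with an illustrative constant c = 1 (the actual constants are model-dependent):

```python
import math

c = 1.0  # illustrative model constant

def t_conventional(T):
    # memory time ~ exp(c / T) for previously known 3-D topological storage
    return math.exp(c / T)

def t_this_model(T):
    # memory time ~ exp(c / T**2) for the spin model described above
    return math.exp(c / T ** 2)

# the advantage of the 1/T**2 law grows very rapidly as T decreases
advantage = [t_this_model(T) / t_conventional(T) for T in (1.0, 0.5, 0.25)]
```

At T = 1 the two laws coincide, but halving the temperature twice already multiplies the relative advantage by several orders of magnitude, which is the practical content of the exponential-in-inverse-temperature-squared memory time.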

This spin model exhibits a previously unexpected topological quantum order, in which ground states are locally indistinguishable, pointlike excitations are immobile, and the immobility is not affected by small perturbations of the Hamiltonian. The degeneracy of the ground state, though also insensitive to perturbations, is a complicated number-theoretic function of the system size, and the system bifurcates into multiple noninteracting copies of itself under real-space renormalization group transformations. The degeneracy, the excitations, and the renormalization group flow can be analyzed using a framework that exploits the spin model's symmetry and some associated free resolutions of modules over polynomial algebras.

Relevance: 20.00%

Abstract:

Curve samplers are sampling algorithms that proceed by viewing the domain as a vector space over a finite field, and randomly picking a low-degree curve in it as the sample. Curve samplers exhibit a nice property besides the sampling property: the restriction of low-degree polynomials over the domain to the sampled curve is still low-degree. This property is often used in combination with the sampling property and has found many applications, including PCP constructions, local decoding of codes, and algebraic PRG constructions.

The randomness complexity of curve samplers is a crucial parameter for their applications. It is known that (non-explicit) curve samplers using O(log N + log(1/δ)) random bits exist, where N is the domain size and δ is the confidence error. The question of explicitly constructing randomness-efficient curve samplers was first raised in [TU06] where they obtained curve samplers with near-optimal randomness complexity.

In this thesis, we present an explicit construction of low-degree curve samplers with optimal randomness complexity (up to a constant factor) that sample curves of degree (m log_q(1/δ))^O(1) in F_q^m. Our construction is a delicate combination of several components, including extractor machinery, limited independence, iterated sampling, and list-recoverable codes.
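
The degree-preservation property mentioned above can be verified concretely in the simplest case, a degree-1 curve (a line): the restriction of a total-degree-3 polynomial over F_p to a line is a univariate polynomial of degree at most 3, so interpolating it from 4 points reproduces it everywhere. The specific polynomial, line, and prime below are arbitrary illustrations:

```python
p = 101  # a small prime field F_p

def f(x, y):
    # a fixed bivariate polynomial of total degree 3 over F_p
    return (3 * x**3 + 5 * x * y**2 + 7 * x * y + 2 * y + 9) % p

# a degree-1 curve t -> (a + c*t, b + d*t) in F_p^2
a, b, c, d = 4, 11, 6, 13

def restrict(t):
    return f((a + c * t) % p, (b + d * t) % p)

def lagrange_eval(pts, x):
    # evaluate the interpolating polynomial through pts at x, mod p
    total = 0
    for i, (xi, yi) in enumerate(pts):
        num, den = 1, 1
        for j, (xj, _) in enumerate(pts):
            if i != j:
                num = num * (x - xj) % p
                den = den * (xi - xj) % p
        total = (total + yi * num * pow(den, p - 2, p)) % p  # den^-1 mod p
    return total

# degree <= 3 in t, so 4 sample points determine the whole restriction
pts = [(t, restrict(t)) for t in range(4)]
assert all(lagrange_eval(pts, t) == restrict(t) for t in range(p))
```

It is this combination, sampling plus low-degree restriction, that PCP and local-decoding applications exploit.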

Relevance: 20.00%

Abstract:

This thesis presents a simplified state-variable method to solve for the nonstationary response of linear MDOF systems subjected to a modulated stationary excitation in both the time and frequency domains. The resulting covariance matrix and evolutionary spectral density matrix of the response may be expressed as the product of a constant system matrix and a time-dependent matrix; the latter can be evaluated explicitly for most envelopes currently used in engineering. The stationary correlation matrix of the response may be found by taking the limit of the covariance response when a unit step envelope is used. The reliability analysis can then be performed based on the first two moments of the response obtained.
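
As a sketch of that stationary limit for the simplest special case, a single-DOF oscillator under white noise, the stationary covariance P satisfies the Lyapunov equation A P + P A^T + Q = 0, which here has a closed-form solution (the parameter values are illustrative, and this is not the thesis's general MDOF machinery):

```python
def stationary_covariance(omega, zeta, q):
    # stationary covariance of x'' + 2*zeta*omega*x' + omega**2*x = w(t),
    # w white noise of intensity q, in state-space form [x, x']
    p22 = q / (4.0 * zeta * omega)   # velocity variance
    p11 = p22 / omega ** 2           # displacement variance: q/(4*zeta*omega**3)
    return [[p11, 0.0], [0.0, p22]]

def lyapunov_residual(omega, zeta, q):
    # residual of A P + P A^T + Q with A = [[0, 1], [-omega**2, -2*zeta*omega]]
    P = stationary_covariance(omega, zeta, q)
    A = [[0.0, 1.0], [-omega ** 2, -2.0 * zeta * omega]]
    Q = [[0.0, 0.0], [0.0, q]]
    R = [[0.0, 0.0], [0.0, 0.0]]
    for i in range(2):
        for j in range(2):
            R[i][j] = Q[i][j]
            for k in range(2):
                R[i][j] += A[i][k] * P[k][j] + P[i][k] * A[j][k]
    return R

R = lyapunov_residual(omega=3.0, zeta=0.05, q=2.0)  # residual should vanish
```

The displacement variance q/(4 ζ ω³) recovered here is the classical random-vibration result that the general method reduces to under a unit step envelope.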

The method presented facilitates obtaining explicit solutions for general linear MDOF systems and is flexible enough to be applied to different stochastic models of excitation such as the stationary models, modulated stationary models, filtered stationary models, and filtered modulated stationary models and their stochastic equivalents including the random pulse train model, filtered shot noise, and some ARMA models in earthquake engineering. This approach may also be readily incorporated into finite element codes for random vibration analysis of linear structures.

A set of explicit solutions for the response of simple linear structures subjected to modulated white noise earthquake models with four different envelopes are presented as illustration. In addition, the method has been applied to three selected topics of interest in earthquake engineering, namely, nonstationary analysis of primary-secondary systems with classical or nonclassical damping, soil layer response and related structural reliability analysis, and the effect of vertical components on the seismic performance of structures. For all three cases, explicit solutions are obtained, dynamic characteristics of structures are investigated, and some suggestions are given for the aseismic design of structures.

Relevance: 20.00%

Abstract:

In the quest for a descriptive theory of decision-making, the rational actor model in economics imposes rather unrealistic expectations and abilities on human decision makers. The further we move from idealized scenarios, such as perfectly competitive markets, and ambitiously extend the reach of the theory to describe everyday decision-making situations, the less sense these assumptions make. Behavioural economics has instead proposed models based on assumptions that are more psychologically realistic, with the aim of gaining more precision and descriptive power. Increased psychological realism, however, comes at the cost of a greater number of parameters and model complexity. There is now a plethora of models, based on different assumptions and applicable in differing contextual settings, and selecting the right model to use tends to be an ad hoc process. In this thesis, we develop optimal experimental design methods and evaluate different behavioral theories against evidence from lab and field experiments.

We look at evidence from controlled laboratory experiments. Subjects are presented with choices between monetary gambles or lotteries. Different decision-making theories evaluate the choices differently and would make distinct predictions about the subjects' choices. Theories whose predictions are inconsistent with the actual choices can be systematically eliminated. Behavioural theories can have multiple parameters, requiring complex experimental designs with a very large number of possible choice tests. This imposes computational and economic constraints on using classical experimental design methods. We develop a methodology of adaptive tests, Bayesian Rapid Optimal Adaptive Designs (BROAD), that sequentially chooses the "most informative" test at each stage and, based on the response, updates its posterior beliefs over the theories, which informs the next most informative test to run. BROAD utilizes the Equivalence Class Edge Cutting (EC2) criterion to select tests. We prove that the EC2 criterion is adaptively submodular, which allows us to prove theoretical guarantees against the Bayes-optimal testing sequence even in the presence of noisy responses. In simulated ground-truth experiments, we find that the EC2 criterion recovers the true hypotheses with significantly fewer tests than more widely used criteria such as Information Gain and Generalized Binary Search. We show, theoretically as well as experimentally, that, surprisingly, these popular criteria can perform poorly in the presence of noise or subject errors. Furthermore, we use the adaptive submodularity of EC2 to implement an accelerated greedy version of BROAD which leads to orders-of-magnitude speedups over other methods.
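
A noiseless toy version of the adaptive loop conveys the flavor: maintain a posterior over hypotheses, greedily pick the test that separates the most posterior mass of hypothesis pairs (a simplified one-step, pair-cutting objective, not the actual EC2 criterion or the BROAD implementation), observe the answer, and eliminate inconsistent hypotheses. The threshold hypotheses and stimuli below are invented for illustration:

```python
import itertools

# toy setup: each hypothesis h is a threshold rule on 1-D stimuli;
# a "test" is a stimulus t, answered 1 if t >= h else 0 (no noise here)
hypotheses = [0.1, 0.3, 0.5, 0.7, 0.9]
tests = [i / 20 for i in range(21)]
prior = {h: 1.0 / len(hypotheses) for h in hypotheses}

def answer(h, t):
    return 1 if t >= h else 0

def expected_cut(post, t):
    # posterior mass of hypothesis pairs that test t would separate
    return sum(post[a] * post[b]
               for a, b in itertools.combinations(post, 2)
               if answer(a, t) != answer(b, t))

def run(true_h):
    post = dict(prior)
    asked = []
    while len([h for h in post if post[h] > 0]) > 1:
        t = max(tests, key=lambda t: expected_cut(post, t))  # greedy pick
        y = answer(true_h, t)
        for h in post:                       # eliminate inconsistent theories
            if answer(h, t) != y:
                post[h] = 0.0
        z = sum(post.values())
        post = {h: p / z for h, p in post.items()}
        asked.append(t)
    return max(post, key=post.get), asked

best, asked = run(0.5)
```

The greedy pair-cutting choices behave like a binary search here, recovering the true hypothesis in two tests rather than the five a naive sweep would need.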

We use BROAD to perform two experiments. First, we compare the main classes of theories for decision-making under risk, namely expected value, prospect theory, constant relative risk aversion (CRRA), and moments models. Subjects are given an initial endowment and sequentially presented choices between two lotteries, with the possibility of losses. The lotteries are selected using BROAD, and 57 subjects from Caltech and UCLA are incentivized by randomly realizing one of the lotteries chosen. Aggregate posterior probabilities over the theories show limited evidence in favour of CRRA and moments models. Classifying the subjects into types showed that most subjects are described by prospect theory, followed by expected value. Adaptive experimental design raises the possibility that subjects could engage in strategic manipulation, i.e. subjects could mask their true preferences and choose differently in order to obtain more favourable tests in later rounds, thereby increasing their payoffs. We pay close attention to this problem; strategic manipulation is ruled out since it is infeasible in practice, and also since we do not find any signatures of it in our data.

In the second experiment, we compare the main theories of time preference: exponential discounting, hyperbolic discounting, "present bias" models (quasi-hyperbolic (α, β) discounting and fixed cost discounting), and generalized-hyperbolic discounting. 40 subjects from UCLA were given choices between two options: a smaller but more immediate payoff versus a larger but later payoff. We found very limited evidence for present bias models and hyperbolic discounting, and most subjects were classified as generalized hyperbolic discounting types, followed by exponential discounting.

In these models the passage of time is treated as linear. We instead consider a psychological model in which the perception of time is subjective. We prove that when the biological (subjective) time is positively dependent, it gives rise to hyperbolic discounting and temporal choice inconsistency.
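
The temporal choice inconsistency that separates hyperbolic from exponential discounting can be illustrated directly; the amounts, delays, and parameters below are arbitrary:

```python
def exponential(t, delta=0.9):
    return delta ** t

def hyperbolic(t, k=1.0):
    return 1.0 / (1.0 + k * t)

def prefers_later(discount, t):
    # 10 at time t versus 15 at time t + 5, compared in discounted value
    return 15.0 * discount(t + 5) > 10.0 * discount(t)

# hyperbolic: impatient about the near future, patient about the far future
flip_hyp = (prefers_later(hyperbolic, 0), prefers_later(hyperbolic, 20))
# exponential: the ranking is the same at every horizon (time-consistent)
flip_exp = (prefers_later(exponential, 0), prefers_later(exponential, 20))
```

The hyperbolic discounter reverses preference as both options recede into the future, while the exponential discounter never does; this reversal is the behavioral signature the subjective-time model is built to generate.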

We also test the predictions of behavioral theories in the "wild". We pay particular attention to prospect theory, which emerged as the dominant theory in our lab experiments on risky choice. Loss aversion and reference dependence predict that consumers will behave in a way distinct from what the standard rational model predicts. Specifically, loss aversion predicts that when an item is offered at a discount, the demand for it will be greater than that explained by its price elasticity. Even more importantly, when the item is no longer discounted, demand for its close substitute will increase excessively. We tested this prediction using a discrete choice model with a loss-averse utility function on data from a large eCommerce retailer. Not only did we identify loss aversion, but we also found that the effect decreased with consumers' experience. We outline the policy implications that consumer loss aversion entails, and strategies for competitive pricing.
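
A minimal sketch of a reference-dependent logit demand model of this kind, with utility linear in price plus a gain-loss term around a reference price (all parameter values, and the binary buy/outside-option structure, are illustrative, not the estimated model):

```python
import math

REF, ETA, OUTSIDE = 10.0, 0.5, -10.0  # reference price, gain-loss weight,
                                      # outside-option utility (all invented)

def utility(price, lam):
    # gains below the reference are weighted by ETA, losses above it by
    # lam * ETA; lam > 1 encodes loss aversion
    gl = ETA * (REF - price) if price <= REF else -lam * ETA * (price - REF)
    return -price + gl

def buy_prob(price, lam):
    u = utility(price, lam)
    return math.exp(u) / (math.exp(u) + math.exp(OUTSIDE))

def asymmetry(lam):
    # demand gained from a 1-unit discount vs demand lost from a 1-unit hike
    gain = buy_prob(REF - 1, lam) - buy_prob(REF, lam)
    loss = buy_prob(REF, lam) - buy_prob(REF + 1, lam)
    return gain, loss

gain_la, loss_la = asymmetry(lam=2.5)  # loss-averse consumer
gain_n, loss_n = asymmetry(lam=1.0)    # no loss aversion: symmetric response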

In future work, BROAD can be widely applied to testing different behavioural models, e.g. in social preference and game theory, and in different contextual settings. Additional measurements beyond choice data, including biological measurements such as skin conductance, can be used to more rapidly eliminate hypotheses and speed up model comparison. Discrete choice models also provide a framework for testing behavioural models with field data, and encourage combined lab-field experiments.

Relevance: 20.00%

Abstract:

The LIGO and Virgo gravitational-wave observatories are complex and extremely sensitive strain detectors that can be used to search for a wide variety of gravitational waves from astrophysical and cosmological sources. In this thesis, I motivate the search for the gravitational wave signals from coalescing black hole binary systems with total mass between 25 and 100 solar masses. The mechanisms for formation of such systems are not well-understood, and we do not have many observational constraints on the parameters that guide the formation scenarios. Detection of gravitational waves from such systems — or, in the absence of detection, the tightening of upper limits on the rate of such coalescences — will provide valuable information that can inform the astrophysics of the formation of these systems. I review the search for these systems and place upper limits on the rate of black hole binary coalescences with total mass between 25 and 100 solar masses. I then show how the sensitivity of this search can be improved by up to 40% by the application of the multivariate statistical classifier known as a random forest of bagged decision trees to more effectively discriminate between signal and non-Gaussian instrumental noise. I also discuss the use of this classifier in the search for the ringdown signal from the merger of two black holes with total mass between 50 and 450 solar masses and present upper limits. I also apply multivariate statistical classifiers to the problem of quantifying the non-Gaussianity of LIGO data. Despite these improvements, no gravitational-wave signals have been detected in LIGO data so far. However, the use of multivariate statistical classification can significantly improve the sensitivity of the Advanced LIGO detectors to such signals.

Relevance: 20.00%

Abstract:

We present the first experimental evidence that the heat capacity of superfluid 4He, at temperatures very close to the lambda transition temperature Tλ, is enhanced by a constant heat flux Q. The heat capacity at constant Q, C_Q, is predicted to diverge at a temperature T_c(Q) < Tλ at which superflow becomes unstable. In agreement with previous measurements, we find that dissipation enters our cell at a temperature T_DAS(Q) below the theoretical value T_c(Q). Our measurements of C_Q were taken using the discrete pulse method at fourteen different heat flux values in the range 1 µW/cm^2 ≤ Q ≤ 4 µW/cm^2. The excess heat capacity ∆C_Q we measure has the predicted scaling behavior as a function of T and Q: ∆C_Q · t^α ∝ (Q/Q_c)^2, where t is the reduced temperature and Q_c(T) ~ t is the critical heat current that results from inverting the equation for T_c(Q). We find that if the theoretical value of T_c(Q) is correct, then ∆C_Q is considerably larger than anticipated. On the other hand, if T_c(Q) ≈ T_DAS(Q), then ∆C_Q is of the same magnitude as the theoretically predicted enhancement.

Relevance: 20.00%

Abstract:

The Advanced LIGO and Virgo experiments are poised to detect gravitational waves (GWs) directly for the first time this decade. The ultimate prize will be joint observation of a compact binary merger in both gravitational and electromagnetic channels. However, GW sky locations that are uncertain by hundreds of square degrees will pose a challenge. I describe a real-time detection pipeline and a rapid Bayesian parameter estimation code that will make it possible to search promptly for optical counterparts in Advanced LIGO. Having analyzed a comprehensive population of simulated GW sources, we describe the sky localization accuracy that the GW detector network will achieve as each detector comes online and progresses toward design sensitivity. Next, in preparation for the optical search with the intermediate Palomar Transient Factory (iPTF), we have developed a unique capability to detect optical afterglows of gamma-ray bursts (GRBs) detected by the Fermi Gamma-ray Burst Monitor (GBM). The GBM's comparably large error regions offer a close parallel to the Advanced LIGO problem, but Fermi's unique access to MeV-GeV photons and its near all-sky coverage may allow us to look at optical afterglows in a relatively unexplored part of the GRB parameter space. We present the discovery and broadband follow-up observations (X-ray, UV, optical, millimeter, and radio) of eight GBM-iPTF afterglows. Two of the bursts (GRB 130702A / iPTF13bxl and GRB 140606B / iPTF14bfu) are at low redshift (z = 0.145 and z = 0.384, respectively), are sub-luminous with respect to "standard" cosmological bursts, and have spectroscopically confirmed broad-line type Ic supernovae. These two bursts are possibly consistent with mildly relativistic shocks breaking out from the progenitor envelopes rather than the standard mechanism of internal shocks within an ultra-relativistic jet.
On a technical level, the GBM-iPTF effort is a prototype for locating and observing optical counterparts of GW events in Advanced LIGO with the Zwicky Transient Facility.

Relevance: 20.00%

Abstract:

Network information theory and channels with memory are two important but difficult frontiers of information theory. In this two-part dissertation, we study these two areas in turn. For the first area we study the so-called entropy vectors via finite group theory, and the network codes constructed from finite groups. In particular, we identify the smallest finite group that violates the Ingleton inequality, an inequality respected by all linear network codes but not satisfied by all entropy vectors. Based on the analysis of this group we generalize it to several families of Ingleton-violating groups, which may be used to design good network codes. In that regard, we study the network codes constructed with finite groups, and in particular show that linear network codes are embedded in the group network codes constructed with these Ingleton-violating families. Furthermore, such codes are strictly more powerful than linear network codes, as they are able to violate the Ingleton inequality while linear network codes cannot. For the second area, we study the impact of memory on the channel capacity through a novel communication system: the energy harvesting channel. Unlike traditional communication systems, the transmitter of an energy harvesting channel is powered by an exogenous energy harvesting device and a finite-sized battery. As a consequence, at each channel use the system can only transmit a symbol whose energy consumption is no more than the energy currently available. This new type of power supply introduces an unprecedented input constraint for the channel, which is random, instantaneous, and has memory. Furthermore, the energy harvesting process is naturally observed causally at the transmitter, but no such information is provided to the receiver. Both of these features pose great challenges for the analysis of the channel capacity.
In this work we use techniques from channels with side information and finite state channels to obtain lower and upper bounds on the capacity of the energy harvesting channel. In particular, we study the stationarity and ergodicity conditions of a surrogate channel to compute and optimize the achievable rates for the original channel. In addition, for practical code design of the system we study the pairwise error probabilities of the input sequences.
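
The Ingleton inequality mentioned in the first part, h(AB) + h(AC) + h(AD) + h(BC) + h(BD) ≥ h(A) + h(B) + h(CD) + h(ABC) + h(ABD), can be checked numerically for linear rank functions, which always satisfy it. The sketch below uses random subspaces over GF(2) encoded as integer bitmasks; it illustrates why linear codes cannot violate the inequality, not the group-theoretic construction of the thesis:

```python
import random

def rank_gf2(vecs):
    # rank over GF(2) of vectors encoded as int bitmasks (Gaussian elimination)
    basis = {}                        # high-bit position -> basis vector
    for v in vecs:
        while v:
            hi = v.bit_length() - 1
            if hi not in basis:
                basis[hi] = v
                break
            v ^= basis[hi]            # reduce by the vector with this high bit
    return len(basis)

def h(*subspaces):
    # "entropy" of a joint variable = rank of all generators together
    # (the linear, i.e. representable, rank function)
    return rank_gf2([v for s in subspaces for v in s])

def ingleton_slack(A, B, C, D):
    rhs = h(A, B) + h(A, C) + h(A, D) + h(B, C) + h(B, D)
    lhs = h(A) + h(B) + h(C, D) + h(A, B, C) + h(A, B, D)
    return rhs - lhs                  # >= 0 for all linear rank functions

random.seed(7)
for _ in range(200):
    A, B, C, D = ([random.randrange(1, 256) for _ in range(3)]
                  for _ in range(4))
    assert ingleton_slack(A, B, C, D) >= 0
```

Entropy vectors from general finite groups are not constrained this way, which is exactly what makes Ingleton-violating groups candidates for network codes stronger than linear ones.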

Relevance: 20.00%

Abstract:

A dilution refrigerator has been constructed capable of producing steady-state temperatures less than 0.075°K. The first part of this work is concerned with the design and construction of this machine. Enough theory is presented to allow one to understand the operation and critical design factors of a dilution refrigerator. The performance of our refrigerator is compared with the operating characteristics of three other dilution refrigerators appearing in the present literature.

The dilution refrigerator constructed was used to measure the nuclear contribution to the low temperature specific heat of a pure, single-crystalline sample of rhenium metal. Measurements were made in magnetic fields from 0 to 12.5 kOe for the temperature range 0.13°K to 0.52°K. The second part of this work discusses the results of these experiments. The expected nuclear contribution is not found when the sample is in the superconducting state. This is believed to be due to the long spin-lattice relaxation times in superconductors. In the normal state, for the temperature range studied, the nuclear contribution is given by A/T^2 where A = 0.061 ± 0.002 millijoule-K/mole. The value of A is found to increase to A = 0.077 ± 0.004 millijoule-K/mole when the sample is located in a magnetic field of 12.5 kOe.

From the measured value of A the splitting of the energy levels of the nuclear spin system due to the interaction of the internal crystalline electric field gradients with the nuclear quadrupole moments is calculated. A comparison is made between the predicted and measured magnetic dependence of the specific heat. Finally, predictions are made of future nuclear magnetic resonance experiments which may be performed to check the results obtained by calorimetry here and, further, to investigate existing theories concerning the sources of electric field gradients in metals.

Relevance: 20.00%

Abstract:

The cataphoretic purification of helium was investigated for binary mixtures of He with Ar, Ne, N2, O2, CO, and CO2 in a DC glow discharge. An experimental technique was developed to continuously measure the composition in the anode end-bulb without sample withdrawal. Discharge currents ranged from 10 mA to 100 mA. Total gas pressure ranged from 2 torr to 9 torr. Initial compositions of the minority component in He ranged from 1.2 mole percent to 7.5 mole percent.

The cataphoretic separation of Ar and Ne from He was found to be in agreement with previous investigators. The cataphoretic separation of N2, O2, and CO from He was found to be similar to noble gas systems in that the steady-state separation improved with (1) increasing discharge current, (2) increasing gas pressure, and (3) decreasing initial composition of the minority component. In the He-CO2 mixture, the CO2 dissociated to CO plus O2. The fraction of CO2 dissociated was directly proportional to the current and pressure and independent of initial composition.

The experimental results for the separation of Ar, Ne, N2, O2, and CO from He were interpreted in the framework of a recently proposed theoretical model involving an electrostatic Peclet number. In the model the electric field was assumed to be constant. This assumption was checked experimentally and the maximum variation in electric field was 35% in time and 30% in position. Consequently, the assumption of constant electric field introduced no more than 55% variation in the electrostatic Peclet number during a separation.

To aid in the design of new cataphoretic systems, the following design criteria were developed and tested in detail: (1) electric field independent of discharge current, (2) electric field directly proportional to total pressure, (3) ion fraction of impurity directly proportional to discharge current, and (4) ion fraction of impurity independent of total pressure. Although these assumptions are approximate, they enabled the steady-state concentration profile to be predicted to within 25% for 75% of the data. The theoretical model was also tested with respect to the characteristic time associated with transient cataphoresis. Over 80% of the data was within a factor of two of the calculated characteristic times.

The electrostatic Peclet number ranged in value from 0.13 to 4.33. Back-calculated ion fractions of the impurity component ranged in value from 4.8×10^-6 to 178×10^-6.

Relevance: 20.00%

Abstract:

This study is concerned with some of the properties of roll waves that develop naturally from a turbulent uniform flow in a wide rectangular channel on a constant steep slope. The wave properties considered were depth at the wave crest, depth at the wave trough, wave period, and wave velocity. The primary focus was on the mean values and standard deviations of the crest depths and wave periods at a given station and how these quantities varied with distance along the channel.

The wave properties were measured in a laboratory channel in which roll waves developed naturally from a uniform flow. The Froude number F (F = u_n/√(g h_n), where u_n = normal velocity, h_n = normal depth, and g = acceleration of gravity) ranged from 3.4 to 6.0 for channel slopes S_o of 0.05 and 0.12 respectively. In the initial phase of their development the roll waves appeared as small-amplitude waves with a continuous water surface profile. These small-amplitude waves subsequently developed into large-amplitude shock waves. Shock waves were found to overtake and combine with other shock waves, with the result that the crest depth of the combined wave was larger than the crest depths before the overtake. Once roll waves began to develop, the mean value of the crest depths h_max increased with distance. Once the shock waves began to overtake, the mean wave period T_av increased approximately linearly with distance.

For a given Froude number and channel slope the observed quantities h-max/hn, T' (T' = So Tav √(g/hn)), and the standard deviations of h-max/hn and T' could be expressed as unique functions of l/hn (l = distance from the beginning of the channel) for the two-fold change in hn occurring in the observed flows. A given value of h-max/hn occurred at smaller values of l/hn as the Froude number was increased. For a given value of h-max/hn, the growth rate δh-max/h-maxδl of the shock waves increased as the Froude number was increased.

A laboratory channel was also used to measure the wave properties of periodic permanent roll waves. For a given Froude number and channel slope the h-max/hn vs. T' relation did not agree with a theory in which the weight of the shock front was neglected. After the theory was modified to include this weight, the observed values of h-max/hn were within an average of 6.5 percent of the predicted values, and the maximum discrepancy was 13.5 percent.

For h-max/hn sufficiently large (h-max/hn > approximately 1.5) it was found that the h-max/hn vs. T' relation for natural roll waves was practically identical to the h-max/hn vs. T' relation for periodic permanent roll waves at the same Froude number and slope. As a result of this correspondence between periodic and natural roll waves, the growth rate δh-max/h-maxδl of shock waves was predicted to depend on the channel slope, and this slope dependence was observed in the experiments.

Resumo:

Part I

Several approximate Hartree-Fock SCF wavefunctions for the ground electronic state of the water molecule have been obtained using an increasing number of multicenter s, p, and d Slater-type atomic orbitals as basis sets. The predicted charge distribution has been extensively tested at each stage by calculating the electric dipole moment, molecular quadrupole moment, diamagnetic shielding, Hellmann-Feynman forces, and electric field gradients at both the hydrogen and the oxygen nuclei. It was found that a carefully optimized minimal basis set suffices to describe the electronic charge distribution adequately except in the vicinity of the oxygen nucleus. Our calculations indicate, for example, that the correct prediction of the field gradient at this nucleus requires a more flexible linear combination of p-orbitals centered on this nucleus than that in the minimal basis set. Theoretical values for the molecular octopole moment components are also reported.

Part II

The perturbation-variational theory of R. M. Pitzer for nuclear spin-spin coupling constants is applied to the HD molecule. The zero-order molecular orbital is described in terms of a single 1s Slater-type basis function centered on each nucleus. The first-order molecular orbital is expressed in terms of these two functions plus one singular basis function each of the types e^(-r)/r and e^(-r) ln r centered on one of the nuclei. The new kinds of molecular integrals were evaluated to high accuracy by numerical and analytical means. The value of the HD spin-spin coupling constant calculated with this near-minimal set of basis functions is JHD = +96.6 cps. This represents an improvement over the previous calculated value of +120 cps obtained without the logarithmic basis function, but it is still considerably off in magnitude compared with the experimental measurement of JHD = +43.0 ± 0.5 cps.

Resumo:

Part 1. Many interesting visual and mechanical phenomena occur in the critical region of fluids, both for the gas-liquid and liquid-liquid transitions. The precise thermodynamic and transport behavior here has some broad consequences for the molecular theory of liquids. Previous studies in this laboratory on a liquid-liquid critical mixture via ultrasonics supported a basically classical analysis of fluid behavior by M. Fixman (e.g., the free energy is assumed analytic in intensive variables in the thermodynamics)--at least when the fluid is not too close to critical. A breakdown in classical concepts is evidenced close to critical, in some well-defined ways. We have studied herein a liquid-liquid critical system of complementary nature (possessing a lower critical mixing or consolute temperature) to all previous mixtures, to look for new qualitative critical behavior. We did not find such new behavior in the ultrasonic absorption ascribable to the critical fluctuations, but we did find extra absorption due to chemical processes (yet these are related to the mixing behavior generating the lower consolute point). We rederived, corrected, and extended Fixman's analysis to interpret our experimental results in these more complex circumstances. The entire account of theory and experiment is prefaced by an extensive introduction recounting the general status of liquid state theory. The introduction provides a context for our present work, and also points out problems deserving attention. Interest in these problems was stimulated by this work but also by work in Part 3.

Part 2. Among variational theories of electronic structure, the Hartree-Fock theory has proved particularly valuable for a practical understanding of such properties as chemical binding, electric multipole moments, and X-ray scattering intensity. It also provides the most tractable method of calculating first-order properties under external or internal one-electron perturbations, either developed explicitly in orders of perturbation theory or in the fully self-consistent method. The accuracy and consistency of first-order properties are poorer than those of zero-order properties, but this is most often due to the use of explicit approximations in solving the perturbed equations, or to inadequacy of the variational basis in size or composition. We have calculated the electric polarizabilities of H2, He, Li, Be, LiH, and N2 by Hartree-Fock theory, using exact perturbation theory or the fully self-consistent method, as dictated by convenience. By careful studies of total basis set composition, we obtained good approximations to limiting Hartree-Fock values of polarizabilities with bases of reasonable size. The values for all species, and for each direction in the molecular cases, are within 8% of experiment, or of the best theoretical values in the absence of the former. Our results support the use of unadorned Hartree-Fock theory for static polarizabilities needed in interpreting electron-molecule scattering data, collision-induced light scattering experiments, and other phenomena involving experimentally inaccessible polarizabilities.
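The fully self-consistent ("finite-field") route to a static polarizability amounts to differentiating the field-dependent energy twice at zero field. The sketch below shows only that finite-difference step on a toy quadratic energy function; the energy model and the value alpha = 1.4 a.u. are invented for the round-trip check and are not results from the thesis.

```python
def polarizability(energy, field=1e-3):
    """Static dipole polarizability from a central finite difference
    of the field-dependent energy: alpha = -d2E/dF2 at F = 0."""
    e_plus, e_zero, e_minus = energy(field), energy(0.0), energy(-field)
    return -(e_plus - 2.0 * e_zero + e_minus) / field**2

# Toy energy E(F) = E0 - 0.5*alpha*F^2 with a made-up alpha of the
# right order of magnitude for a small atom; the finite difference
# should recover it exactly for a quadratic.
alpha_toy = 1.4
toy_energy = lambda F: -2.8617 - 0.5 * alpha_toy * F**2
print(polarizability(toy_energy))  # recovers ~1.4
```

In a real calculation `energy` would be a converged self-consistent-field energy at each applied field, and the field strength must balance truncation error (too large) against numerical cancellation (too small).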

Part 3. Numerical integration of the close-coupled scattering equations has been carried out to obtain vibrational transition probabilities for some models of the electronically adiabatic H2-H2 collision. All the models use a Lennard-Jones interaction potential between nearest atoms in the collision partners. We have analyzed the results for some insight into the vibrational excitation process in its dependence on the energy of collision, the nature of the vibrational binding potential, and other factors. We conclude also that replacement of earlier, simpler models of the interaction potential by the Lennard-Jones form adds very little realism for all the complication it introduces. A brief introduction precedes the presentation of our work and places it in the context of attempts to understand the collisional activation process in chemical reactions as well as some other chemical dynamics.
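The Lennard-Jones form used for the atom-atom interaction is standard and can be written down directly; the reduced units (epsilon = sigma = 1) below are an assumption for illustration, not parameters from the thesis.

```python
def lennard_jones(r, epsilon=1.0, sigma=1.0):
    """Lennard-Jones interaction V(r) = 4*eps*[(sigma/r)^12 - (sigma/r)^6]
    between nearest atoms in the colliding pair (reduced units assumed)."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 * sr6 - sr6)

# The well minimum sits at r = 2^(1/6)*sigma with depth -epsilon,
# and the potential vanishes at r = sigma.
r_min = 2.0 ** (1.0 / 6.0)
print(lennard_jones(r_min))  # -> -1.0
print(lennard_jones(1.0))    # -> 0.0
```

The steep r^-12 repulsion is what the abstract's conclusion speaks to: it adds numerical stiffness to the close-coupled equations without, in the authors' assessment, adding much physical realism over simpler repulsive models.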