15 results for Exponential and logarithmic quaternion functions

in CaltechTHESIS


Relevance: 20.00%

Abstract:

Data were taken in 1979-80 by the CCFRR high energy neutrino experiment at Fermilab. A total of 150,000 neutrino and 23,000 antineutrino charged current events in the approximate energy range 25 < E_ν < 250 GeV are measured and analyzed. The structure functions F_2 and xF_3 are extracted for three assumptions about σ_L/σ_T: R = 0, R = 0.1, and R given by a QCD-based expression. Systematic errors are estimated and their significance is discussed. Comparisons of the x and Q^2 behaviour of the structure functions with results from other experiments are made.

We find that statistical errors currently dominate our knowledge of the valence quark distribution, which is studied in this thesis. xF_3 from different experiments has, within errors and apart from level differences, the same dependence on x and Q^2, except for the HPWF results. The CDHS F_2 shows a clear fall-off at low x relative to the CCFRR and EMC results, again apart from level differences which are calculable from the cross-sections.

The result for the GLS sum rule is found to be 2.83 ± 0.15 ± 0.09 ± 0.10, where the first error is statistical, the second is an overall level error, and the third covers the rest of the systematic errors. QCD studies of xF_3 to leading and second order have been performed. The QCD evolution of xF_3, which is independent of R and the strange sea, does not depend on the gluon distribution, and fits yield

Λ_(LO) = 88^(+163)_(-78) ^(+113)_(-70) MeV

The systematic errors are smaller than the statistical errors. Second order fits give somewhat different values of Λ, although α_s (at Q^2_0 = 12.6 GeV^2) is not so different.

A fit using the better-determined F_2 in place of xF_3 for x > 0.4, i.e., assuming q̄ = 0 in that region, gives

Λ_(LO) = 266^(+114)_(-104) ^(+85)_(-79) MeV

Again, the statistical errors are larger than the systematic errors. An attempt to measure R was made and the measurements are described. Utilizing the inequality q̄(x) ≥ 0, we find that in the region x > 0.4, R is less than 0.55 at the 90% confidence level.
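
For context (a standard result supplied here, not text from the thesis), the Gross-Llewellyn Smith sum rule referred to above states that the integral of the nonsinglet structure function equals the number of valence quarks up to a perturbative QCD correction, roughly

\int_0^1 F_3(x, Q^2)\,dx \;=\; \int_0^1 \frac{xF_3(x, Q^2)}{x}\,dx \;\approx\; 3\left(1 - \frac{\alpha_s(Q^2)}{\pi}\right),

which is why a measured value of 2.83 rather than 3 is itself a test of QCD.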

Relevance: 20.00%

Abstract:

Computer science and electrical engineering have been the great success story of the twentieth century. The neat modularity and mapping of a language onto circuits have led to robots on Mars, desktop computers, and smartphones. But these devices are not yet able to do some of the things that life takes for granted: repair a scratch, reproduce, regenerate, or grow exponentially fast, all while remaining functional.

This thesis explores and develops algorithms, molecular implementations, and theoretical proofs in the context of “active self-assembly” of molecular systems. The long-term vision of active self-assembly is the theoretical and physical implementation of materials that are composed of reconfigurable units with the programmability and adaptability of biology’s numerous molecular machines. En route to this goal, we must first find a way to overcome the memory limitations of molecular systems, and to discover the limits of complexity that can be achieved with individual molecules.

One of the main thrusts in molecular programming is to use computer science as a tool for figuring out what can be achieved. While molecular systems that are Turing-complete have been demonstrated [Winfree, 1996], these systems still cannot achieve some of the feats biology has achieved.

One might think that because a system is Turing-complete, capable of computing "anything," it can perform any arbitrary task. But while it can simulate any digital computational problem, there are many behaviors that are not "computations" in a classical sense and cannot be directly implemented. Examples include exponential growth and molecular motion relative to a surface.

Passive self-assembly systems cannot implement these behaviors because (a) molecular motion relative to a surface requires a source of fuel that is external to the system, and (b) passive systems are too slow to assemble exponentially-fast-growing structures. We call these behaviors “energetically incomplete” programmable behaviors. This class of behaviors includes any behavior where a passive physical system simply does not have enough physical energy to perform the specified tasks in the requisite amount of time.

As we will demonstrate and prove, a sufficiently expressive implementation of an "active" molecular self-assembly approach can achieve these behaviors. Using an external source of fuel solves part of the problem, so the system is not "energetically incomplete." But the programmable system also needs to have sufficient expressive power to achieve the specified behaviors. Perhaps surprisingly, some of these systems do not even require Turing completeness to be sufficiently expressive.

Building on a large variety of work by other scientists in the fields of DNA nanotechnology, chemistry and reconfigurable robotics, this thesis introduces several research contributions in the context of active self-assembly.

We show that simple primitives such as insertion and deletion are able to generate complex and interesting results such as the growth of a linear polymer in logarithmic time and the ability of a linear polymer to treadmill. To this end we developed a formal model for active self-assembly that is directly implementable with DNA molecules. We show that this model is computationally equivalent to a machine capable of producing languages stronger than regular languages and at most as strong as context-free grammars. This is a great advance in the theory of active self-assembly, as prior models were either entirely theoretical or only implementable in the context of macro-scale robotics.

We developed a chain reaction method for the autonomous exponential growth of a linear DNA polymer. Our method is based on the insertion of molecules into the assembly, which generates two new insertion sites for every initial one employed. The building of a line in logarithmic time is a first step toward building a shape in logarithmic time. We demonstrate the first construction of a synthetic linear polymer that grows exponentially fast via insertion. Using spectrofluorimetry and gel electrophoresis experiments, we show that monomer molecules are converted into the polymer in logarithmic time. We also demonstrate the division of these polymers via the addition of a single DNA complex that competes with the insertion mechanism. This shows the growth of a population of polymers in logarithmic time. We characterize the DNA insertion mechanism that we utilize in Chapter 4. We experimentally demonstrate that we can control the kinetics of this reaction over at least seven orders of magnitude by programming the sequences of DNA that initiate the reaction.
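
To illustrate why insertion-based growth reaches length n in O(log n) rounds, here is a toy Python sketch of an idealized synchronous model (our simplification, not the thesis's chemical kinetics): every round each insertion site accepts one monomer and is replaced by two new sites, so the length roughly doubles per round.

import math

def rounds_to_reach(target_length):
    """Idealized insertion growth: start with a short polymer with one
    insertion site; each round every site accepts one monomer and each
    insertion exposes two new sites. Returns the number of rounds needed
    to reach target_length. (Hypothetical toy model, for illustration.)"""
    length, sites, rounds = 2, 1, 0
    while length < target_length:
        length += sites   # one monomer inserted at every site
        sites *= 2        # each insertion creates two new sites
        rounds += 1
    return rounds

for n in (10, 100, 1000, 10**6):
    # the round count tracks log2(n), i.e. growth to length n in logarithmic time
    print(n, rounds_to_reach(n), math.ceil(math.log2(n)))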

In addition, we review co-authored work on programming molecular robots using prescriptive landscapes of DNA origami; this was the first microscopic demonstration of programming a molecular robot to walk on a 2-dimensional surface. We developed a snapshot method for imaging these random walking molecular robots and a CAPTCHA-like analysis method for difficult-to-interpret imaging data.

Relevance: 20.00%

Abstract:

The connections between convexity and submodularity are explored for the purposes of minimizing and learning submodular set functions.

First, we develop a novel method for minimizing a particular class of submodular functions, which can be expressed as a sum of concave functions composed with modular functions. The basic algorithm uses an accelerated first-order method applied to a smoothed version of its convex extension. The smoothing algorithm is particularly novel, as it allows us to treat general concave potentials without needing to construct a piecewise-linear approximation as graph-based techniques do.
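
As a point of reference (a generic textbook construction, not the thesis's algorithm), the convex extension in question is the Lovász extension, which evaluates the set function on the threshold sets of a continuous point; minimizing it over the unit cube is equivalent to minimizing the submodular function, which is what first-order and smoothing methods exploit. A minimal Python sketch:

import numpy as np

def lovasz_extension(F, w):
    """Evaluate the Lovász (Choquet) extension of a set function F at w.
    F maps a Python set of indices to a float, with F(empty set) = 0;
    the extension is convex exactly when F is submodular.
    (Generic construction, shown for illustration only.)"""
    order = np.argsort(-np.asarray(w, dtype=float))  # coordinates in decreasing order
    S, F_prev, value = set(), F(set()), 0.0
    for i in order:
        S.add(int(i))
        F_cur = F(S)
        value += w[int(i)] * (F_cur - F_prev)        # weight marginal gains by w
        F_prev = F_cur
    return value

# Example: the cut function of a small graph is submodular.
edges = [(0, 1), (1, 2), (0, 2)]
def cut(S):
    return float(sum((u in S) != (v in S) for u, v in edges))

print(lovasz_extension(cut, [0.9, 0.5, 0.1]))  # 1.6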

Second, we derive the general conditions under which it is possible to find a minimizer of a submodular function via a convex problem. This provides a framework for developing submodular minimization algorithms. The framework is then used to develop several algorithms that can be run in a distributed fashion. This is particularly useful for applications where the submodular objective function consists of a sum of many terms, each term dependent on a small part of a large data set.

Lastly, we approach the problem of learning set functions from an unorthodox perspective: sparse reconstruction. We demonstrate an explicit connection between the problem of learning set functions from random evaluations and that of recovering sparse signals. Based on the observation that the Fourier transform for set functions satisfies exactly the conditions needed for sparse reconstruction algorithms to work, we examine different function classes under which uniform reconstruction is possible.
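
A short sketch (ours, not code from the thesis) of the Fourier expansion of a set function that underlies this viewpoint: the parity characters χ_S(T) = (−1)^|S ∩ T| form an orthonormal basis, and a set function with few nonzero Fourier coefficients is literally a sparse signal in that basis.

from itertools import combinations

def fourier_coefficients(f, n):
    """All Fourier (Walsh/parity) coefficients of a set function
    f: subsets of range(n) -> float, via
        f_hat(S) = 2**(-n) * sum_T f(T) * (-1)**len(S & T).
    Exponential-time reference implementation, for illustration only."""
    subsets = [frozenset(c) for k in range(n + 1) for c in combinations(range(n), k)]
    coeffs = {}
    for S in subsets:
        total = sum(f(T) * ((-1) ** len(S & T)) for T in subsets)
        coeffs[S] = total / 2 ** n
    return coeffs

# Example: f(T) = |T| has only n + 1 nonzero coefficients (empty set and singletons),
# so it is very sparse in the Fourier basis.
coeffs = fourier_coefficients(lambda T: float(len(T)), 3)
print({tuple(sorted(S)): c for S, c in coeffs.items() if abs(c) > 1e-9})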

Relevance: 20.00%

Abstract:

The aim of this paper is to investigate to what extent the known theory of subdifferentiability and generic differentiability of convex functions defined on open sets can be carried out in the context of convex functions defined on not necessarily open sets. Among the main results obtained I would like to mention a Kenderov type theorem (the subdifferential at a generic point is contained in a sphere), a generic Gâteaux differentiability result in Banach spaces of class S and a generic Fréchet differentiability result in Asplund spaces. At least two methods can be used to prove these results: first, a direct one, and second, a more general one, based on the theory of monotone operators. Since this last theory was previously developed essentially for monotone operators defined on open sets, it was necessary to extend it to the context of monotone operators defined on a larger class of sets, our "quasi open" sets. This is done in Chapter III. As a matter of fact, most of these results have an even more general nature and have roots in the theory of minimal usco maps, as shown in Chapter II.
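
For orientation (standard definitions supplied here, not text from the abstract): for a convex function f defined on a convex set C in a Banach space X, the subdifferential at x ∈ C is

\partial f(x) \;=\; \{\, x^{*} \in X^{*} : f(y) \ \ge\ f(x) + \langle x^{*},\, y - x \rangle \ \text{for all } y \in C \,\},

and generic differentiability results of the kind mentioned above assert that, outside a small (first-category) set of points, this set collapses to a single functional in the Gâteaux or Fréchet sense.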

Relevance: 20.00%

Abstract:

We develop new algorithms which combine the rigorous theory of mathematical elasticity with the geometric underpinnings and computational attractiveness of modern tools in geometry processing. We develop a simple elastic energy based on the Biot strain measure, which improves on state-of-the-art methods in geometry processing. We use this energy within a constrained optimization problem to, for the first time, provide surface parameterization tools which guarantee injectivity and bounded distortion, are user-directable, and which scale to large meshes. With the help of some new generalizations in the computation of matrix functions and their derivatives, we extend our methods to a large class of hyperelastic stored energy functions quadratic in piecewise analytic strain measures, including the Hencky (logarithmic) strain, opening up a wide range of possibilities for robust and efficient nonlinear elastic simulation and geometry processing by elastic analogy.
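
For concreteness (standard continuum-mechanics definitions supplied here, not taken from the abstract): with deformation gradient F and right stretch S = (F^T F)^{1/2}, the Biot strain is S − I and the Hencky strain is log S, and an energy quadratic in a strain measure E typically takes the Hookean form

\Psi \;=\; \mu\,\lVert E \rVert_F^{2} \;+\; \frac{\lambda}{2}\,\bigl(\operatorname{tr} E\bigr)^{2},

whose evaluation and differentiation for E = log S is exactly where derivatives of matrix functions such as the matrix square root and logarithm are needed.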

Relevance: 20.00%

Abstract:

The applicability of the white-noise method to the identification of a nonlinear system is investigated. Subsequently, the method is applied to certain vertebrate retinal neuronal systems and nonlinear, dynamic transfer functions are derived which describe quantitatively the information transformations starting with the light-pattern stimulus and culminating in the ganglion response which constitutes the visually-derived input to the brain. The retina of the catfish, Ictalurus punctatus, is used for the experiments.

The Wiener formulation of the white-noise theory is shown to be impractical and difficult to apply to a physical system. A different formulation based on cross-correlation techniques is shown to be applicable to a wide range of physical systems provided certain considerations are taken into account. These considerations include the time-invariance of the system, an optimum choice of the white-noise input bandwidth, nonlinearities that allow a representation in terms of a small number of characterizing kernels, the memory of the system, and the temporal length of the characterizing experiment. Error analysis of the kernel estimates is made taking into account various sources of error such as noise at the input and output, the bandwidth of the white-noise input, and the truncation of the Gaussian by the apparatus.
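
A minimal Python sketch of the cross-correlation idea (our illustration, assuming the standard Lee-Schetzen estimator rather than the exact procedure used in the thesis): for a system driven by Gaussian white noise of power level P, the first-order kernel is estimated as k1(τ) ≈ (1/P)·E[y(t)·x(t − τ)].

import numpy as np

def first_order_kernel(x, y, max_lag, power):
    """Estimate the first-order Wiener kernel by cross-correlation:
        k1[lag] ~ (1/P) * mean_t( y[t] * x[t - lag] ),
    where x approximates Gaussian white noise of power level P.
    (Textbook Lee-Schetzen-style estimator, for illustration only.)"""
    k1 = np.zeros(max_lag)
    for lag in range(max_lag):
        k1[lag] = np.mean(y[lag:] * x[:len(x) - lag]) / power
    return k1

# Toy check: recover a known (hypothetical) exponential impulse response.
rng = np.random.default_rng(0)
dt, n, power = 1e-3, 100_000, 1.0
x = rng.normal(0.0, np.sqrt(power / dt), n)   # discrete stand-in for white noise
h = np.exp(-np.arange(0.0, 0.05, dt) / 0.01)  # "true" first-order kernel
y = np.convolve(x, h)[:n] * dt                # response of the linear system
k1_hat = first_order_kernel(x, y, len(h), power)
print(np.max(np.abs(k1_hat - h)))             # estimation error shrinks like 1/sqrt(n)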

Nonlinear transfer functions are obtained, as sets of kernels, for several neuronal systems: Light → Receptors, Light → Horizontal, Horizontal → Ganglion, Light → Ganglion and Light → ERG. The derived models can predict, with reasonable accuracy, the system response to any input. Comparison of model and physical system performance showed close agreement for a great number of tests, the most stringent of which is comparison of their responses to a white-noise input. Other tests include step and sine responses and power spectra.

Many functional traits are revealed by these models. Some are: (a) the receptor and horizontal cell systems are nearly linear (small signal) with certain "small" nonlinearities, and become faster (latency-wise and frequency-response-wise) at higher intensity levels, (b) all ganglion systems are nonlinear (half-wave rectification), (c) the receptive field center to ganglion system is slower (latency-wise and frequency-response-wise) than the periphery to ganglion system, (d) the lateral (eccentric) ganglion systems are just as fast (latency and frequency response) as the concentric ones, (e) (bipolar response) = (input from receptors) - (input from horizontal cell), (f) receptive field center and periphery exert an antagonistic influence on the ganglion response, (g) implications about the origin of ERG, and many others.

An analytical solution is obtained for the spatial distribution of potential in the S-space, which fits very well experimental data. Different synaptic mechanisms of excitation for the external and internal horizontal cells are implied.

Relevance: 20.00%

Abstract:

This thesis is motivated by safety-critical applications involving autonomous air, ground, and space vehicles carrying out complex tasks in uncertain and adversarial environments. We use temporal logic as a language to formally specify complex tasks and system properties. Temporal logic specifications generalize the classical notions of stability and reachability that are studied in the control and hybrid systems communities. Given a system model and a formal task specification, the goal is to automatically synthesize a control policy for the system that ensures that the system satisfies the specification. This thesis presents novel control policy synthesis algorithms for optimal and robust control of dynamical systems with temporal logic specifications. Furthermore, it introduces algorithms that are efficient and extend to high-dimensional dynamical systems.

The first contribution of this thesis is the generalization of a classical linear temporal logic (LTL) control synthesis approach to optimal and robust control. We show how we can extend automata-based synthesis techniques for discrete abstractions of dynamical systems to create optimal and robust controllers that are guaranteed to satisfy an LTL specification. Such optimal and robust controllers can be computed at little extra computational cost compared to computing a feasible controller.
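
As a minimal sketch of the automaton-based idea for the co-safe case (a generic textbook construction supplied here; the thesis's algorithms additionally handle optimality and robustness): form the product of the discrete abstraction with a DFA accepting the good finite prefixes of the specification, then search the product for an accepting state; the resulting path projects onto a satisfying control sequence.

from collections import deque

def synthesize(ts_trans, labels, init_state, dfa_trans, dfa_init, dfa_accept):
    """BFS over the product of a finite transition system and a DFA.
    ts_trans: dict state -> list of (action, next_state)
    labels:   dict state -> label fed to the DFA
    dfa_trans: dict (dfa_state, label) -> dfa_state
    Returns a list of actions reaching an accepting product state, or None.
    (Generic construction; the thesis's algorithms differ in the details.)"""
    start = (init_state, dfa_trans[(dfa_init, labels[init_state])])
    parent = {start: None}
    queue = deque([start])
    while queue:
        s, q = queue.popleft()
        if q in dfa_accept:
            actions, node = [], (s, q)
            while parent[node] is not None:
                node, a = parent[node]
                actions.append(a)
            return list(reversed(actions))
        for a, s2 in ts_trans.get(s, []):
            q2 = dfa_trans[(q, labels[s2])]
            if (s2, q2) not in parent:
                parent[(s2, q2)] = ((s, q), a)
                queue.append((s2, q2))
    return None

# Toy example: the co-safe requirement "eventually visit a state labeled 'goal'".
ts = {'s0': [('a', 's1'), ('b', 's0')], 's1': [('a', 's2')], 's2': []}
lab = {'s0': 'other', 's1': 'other', 's2': 'goal'}
dfa = {('q0', 'other'): 'q0', ('q0', 'goal'): 'q1',
       ('q1', 'other'): 'q1', ('q1', 'goal'): 'q1'}
print(synthesize(ts, lab, 's0', dfa, 'q0', {'q1'}))   # ['a', 'a']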

The second contribution of this thesis addresses the scalability of control synthesis with LTL specifications. A major limitation of the standard automaton-based approach for control with LTL specifications is that the automaton might be doubly-exponential in the size of the LTL specification. We introduce a fragment of LTL for which one can compute feasible control policies in time polynomial in the size of the system and specification. Additionally, we show how to compute optimal control policies for a variety of cost functions, and identify interesting cases when this can be done in polynomial time. These techniques are particularly relevant for online control, as one can guarantee that a feasible solution can be found quickly, and then iteratively improve on the quality as time permits.

The final contribution of this thesis is a set of algorithms for computing feasible trajectories for high-dimensional, nonlinear systems with LTL specifications. These algorithms avoid a potentially computationally-expensive process of computing a discrete abstraction, and instead compute directly on the system's continuous state space. The first method uses an automaton representing the specification to directly encode a series of constrained-reachability subproblems, which can be solved in a modular fashion by using standard techniques. The second method encodes an LTL formula as mixed-integer linear programming constraints on the dynamical system. We demonstrate these approaches with numerical experiments on temporal logic motion planning problems with high-dimensional (10+ states) continuous systems.
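
To make the mixed-integer idea concrete, the standard big-M encoding of the simple requirement "eventually reach the polytope P = {x : Hx ≤ h} within horizon T" (a generic illustration supplied here; the thesis's encoding handles full LTL) is

b_t \in \{0,1\}, \qquad H x_t \;\le\; h + M(1 - b_t)\mathbf{1} \quad (t = 0,\dots,T), \qquad \sum_{t=0}^{T} b_t \;\ge\; 1,

so that b_t = 1 forces x_t ∈ P and the final constraint captures the disjunction over time; richer temporal and Boolean structure is encoded by composing such indicator variables.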

Relevance: 20.00%

Abstract:

Part I of this thesis deals with three topics concerning the luminescence from bound multi-exciton complexes in Si. Part II presents a model for the decay of electron-hole droplets in pure and doped Ge.

Part I.

We present high resolution photoluminescence data for Si doped with Al, Ga, and In. We observe emission lines due to recombination of electron-hole pairs in bound excitons and satellite lines which have been interpreted in terms of complexes of several excitons bound to an impurity. The bound exciton luminescence in Si:Ga and Si:Al consists of three emission lines due to transitions from the ground state and two low-lying excited states. In Si:Ga, we observe a second triplet of emission lines which precisely mirrors the triplet due to the bound exciton. This second triplet is interpreted as due to decay of a two-exciton complex into the bound exciton. The observation of the second complete triplet in Si:Ga conclusively demonstrates that more than one exciton will bind to an impurity. Similar results are found for Si:Al. The energies of the lines show that the second exciton is less tightly bound than the first in Si:Ga. Other lines are observed at lower energies. The assumption of ground-state to ground-state transitions for the lower energy lines is shown to produce a complicated dependence of the binding energy of the last exciton on the number of excitons in a complex. No line attributable to the decay of a two-exciton complex is observed in Si:In.

We present measurements of the bound exciton lifetimes for the four common acceptors in Si and for the first two bound multi-exciton complexes in Si:Ga and Si:Al. These results are shown to be in agreement with a calculation by Osbourn and Smith of Auger transition rates for acceptor bound excitons in Si. Kinetics determine the relative populations of complexes of various sizes and work functions, at temperatures which do not allow them to thermalize with respect to one another. It is shown that kinetic limitations may make it impossible to form two-exciton complexes in Si:In from a gas of free excitons.

We present direct thermodynamic measurements of the work functions of bound multi-exciton complexes in Al, B, P and Li doped Si. We find that in general the work functions are smaller than previously believed. These data remove one obstacle to the bound multi-exciton complex picture which has been the need to explain the very large apparent work functions for the larger complexes obtained by assuming that some of the observed lines are ground-state to ground-state transitions. None of the measured work functions exceed that of the electron-hole liquid.

Part II.

A new model for the decay of electron-hole-droplets in Ge is presented. The model is based on the existence of a cloud of droplets within the crystal and incorporates exciton flow among the drops in the cloud and the diffusion of excitons away from the cloud. It is able to fit the experimental luminescence decays for pure Ge at different temperatures and pump powers while retaining physically reasonable parameters for the drops. It predicts the shrinkage of the cloud at higher temperatures which has been verified by spatially and temporally resolved infrared absorption experiments. The model also accounts for the nearly exponential decay of electron-hole-droplets in lightly doped Ge at higher temperatures.

Relevance: 20.00%

Abstract:

We develop a logarithmic potential theory on Riemann surfaces which generalizes logarithmic potential theory on the complex plane. We show the existence of an equilibrium measure and examine its structure. This leads to a formula for the structure of the equilibrium measure which is new even in the plane. We then use our results to study quadrature domains, Laplacian growth, and Coulomb gas ensembles on Riemann surfaces. We prove that the complement of the support of the equilibrium measure satisfies a quadrature identity. Furthermore, our setup allows us to naturally realize weak solutions of Laplacian growth (for a general time-dependent source) as an evolution of the support of equilibrium measures. When applied to the Riemann sphere this approach unifies the known methods for generating interior and exterior Laplacian growth.

We later narrow our focus to a special class of quadrature domains which we call Algebraic Quadrature Domains. We show that many of the properties of quadrature domains generalize to this setting. In particular, the boundary of an Algebraic Quadrature Domain is the inverse image of a planar algebraic curve under a meromorphic function. This makes the study of the topology of Algebraic Quadrature Domains an interesting problem. We briefly investigate this problem and then narrow our focus to the study of the topology of classical quadrature domains. We extend the results of Lee and Makarov and prove (for n ≥ 3) c ≤ 5n − 5, where c and n denote the connectivity and degree of a (classical) quadrature domain. At the same time we obtain a new upper bound on the number of isolated points of the algebraic curve corresponding to the boundary and thus a new upper bound on the number of special points.

In the final chapter we study Coulomb gas ensembles on Riemann surfaces.
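
For orientation (the classical planar setup that the thesis generalizes, supplied here for readability): given an external field Q, the weighted logarithmic energy of a compactly supported probability measure μ is

I_Q(\mu) \;=\; \iint \log\frac{1}{|z - w|}\, d\mu(z)\, d\mu(w) \;+\; 2\int Q\, d\mu,

and the equilibrium measure is its unique minimizer; the structure of its support is what connects the theory to quadrature domains, Laplacian growth, and Coulomb gas ensembles.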

Relevance: 20.00%

Abstract:

Several patients of P. J. Vogel who had undergone cerebral commissurotomy for the control of intractable epilepsy were tested on a variety of tasks to measure aspects of cerebral organization concerned with lateralization in hemispheric function. From tests involving identification of shapes it was inferred that in the absence of the neocortical commissures, the left hemisphere still has access to certain types of information from the ipsilateral field. The major hemisphere can still make crude differentiations between various left-field stimuli, but is unable to specify exact stimulus properties. Most of the time the major hemisphere, having access to some ipsilateral stimuli, dominated the minor hemisphere in control of the body.

Competition for control of the body between the hemispheres is seen most clearly in tests of minor hemisphere language competency, in which it was determined that though the minor hemisphere does possess some minimal ability to express language, the major hemisphere prevented its expression much of the time. The right hemisphere was superior to the left in tests of perceptual visualization, and the two hemispheres appeared to use different strategies in attempting to solve the problems, namely, analysis for the left hemisphere and synthesis for the right hemisphere.

Analysis of the patients' verbal and performance I.Q.'s, as well as observations made throughout testing, suggests that the corpus callosum plays a critical role in activities that involve functions in which the minor hemisphere normally excels, and that the motor expression of these functions may normally come through the major hemisphere by way of the corpus callosum.

Lateral specialization is thought to be an evolutionary adaptation which overcame problems of a functional antagonism between the abilities normally associated with the two hemispheres. The tests of perception suggested that this function lateralized into the mute hemisphere because of an active counteraction by language. This latter idea was confirmed by the finding that left-handers, in whom there is likely to be bilateral language centers, are greatly deficient on tests of perception.

Relevance: 20.00%

Abstract:

A locally integrable function is said to be of vanishing mean oscillation (VMO) if its mean oscillation over cubes in R^d converges to zero with the volume of the cubes. We establish necessary and sufficient conditions for a locally integrable function defined on a bounded measurable set of positive measure to be the restriction to that set of a VMO function.

We consider the similar extension problem pertaining to BMO(ρ) functions, that is, those VMO functions whose mean oscillation over any cube Q is O(ρ(l(Q))), where l(Q) is the side length of Q and ρ is a positive, non-decreasing function with ρ(0+) = 0.
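
In symbols (standard definitions, supplied here for readability): writing the mean oscillation of f over a cube Q as

\Omega(f, Q) \;=\; \frac{1}{|Q|}\int_Q \bigl|f - f_Q\bigr|\,dx, \qquad f_Q = \frac{1}{|Q|}\int_Q f\,dx,

f belongs to VMO when Ω(f, Q) → 0 as |Q| → 0, and to BMO(ρ) when Ω(f, Q) = O(ρ(l(Q))).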

We apply these results to obtain sufficient conditions for a Blaschke sequence to be the zeros of an analytic BMO(ρ) function on the unit disc.

Relevance: 20.00%

Abstract:

Let E be a compact subset of the n-dimensional unit cube, I^n, and let C be a collection of convex bodies, all of positive n-dimensional Lebesgue measure, such that C contains bodies with arbitrarily small measure. The dimension of E with respect to the covering class C is defined to be the number

d_C(E) = sup{β : H_{β,C}(E) > 0},

where H_{β,C} is the outer measure

inf{Σ_i m(C_i)^β : ∪_i C_i ⊇ E, C_i ∈ C}.

Only the one- and two-dimensional cases are studied. Moreover, the covering classes considered are those consisting of intervals and rectangles parallel to the coordinate axes, and those closed under translations. A covering class is identified with a set of points in the left-open portion, I'^n, of I^n, whose closure intersects I^n − I'^n. For n = 2, the outer measure H_{β,C} is adopted in place of the usual:

inf{Σ_i (diam C_i)^β : ∪_i C_i ⊇ E, C_i ∈ C},

for the purpose of studying the influence of the shape of the covering sets on the dimension d_C(E).

If E is a closed set in I^1, let M(E) be the class of all non-decreasing functions μ(x) supported on E with μ(x) = 0 for x ≤ 0 and μ(x) = 1 for x ≥ 1. Define, for each μ ∈ M(E),

d_C(μ) = lim inf_{c → 0, c ∈ C} log Δμ(c) / log c,

where Δμ(c) = sup_x (μ(x+c) − μ(x)). It is shown that

d_C(E) = sup{d_C(μ) : μ ∈ M(E)}.

This notion of dimension is extended to a certain class ℱ of sub-additive functions, and the problem of studying the behavior of d_C(E) as a function of the covering class C is reduced to the study of d_C(f) where f ∈ ℱ. Specifically, the set of points in I^2,

(*)   {(d_B(f), d_C(f)) : f ∈ ℱ}

is characterized by a comparison of the relative positions of the points of B and C. A region of the form (*) is always closed and doubly-starred with respect to the points (0, 0) and (1, 1). Conversely, given any closed region in I^2, doubly-starred with respect to (0, 0) and (1, 1), there are covering classes B and C such that (*) is exactly that region. All of the results are shown to apply to the dimension of closed sets E. Similar results can be obtained when a finite number of covering classes are considered.

In two dimensions, the notion of dimension is extended to the class M of functions f(x, y), non-decreasing in x and y, supported on I^2 with f(x, y) = 0 for x · y = 0 and f(1, 1) = 1, by the formula

d_C(f) = lim inf_{s·t → 0, (s,t) ∈ C} log Δf(s, t) / log(s·t),

where

Δf(s, t) = sup_{x,y} (f(x+s, y+t) − f(x+s, y) − f(x, y+t) + f(x, y)).

A characterization of the equivalence d_{C_1}(f) = d_{C_2}(f) for all f ∈ M is given by a comparison of the gaps in the sets of products s·t and quotients s/t, (s, t) ∈ C_i (i = 1, 2).

Relevance: 20.00%

Abstract:

Three different categories of flow problems of a fluid containing small particles are considered here. They are: (i) a fluid containing small, non-reacting particles (Parts I and II); (ii) a fluid containing reacting particles (Parts III and IV); and (iii) a fluid containing particles of two distinct sizes with collisions between the two groups of particles (Part V).

Part I

A numerical solution is obtained for a fluid containing small particles flowing over an infinite disc rotating at a constant angular velocity. It is a boundary layer type flow, and the boundary layer thickness for the mixture is estimated. For large Reynolds number, the solution suggests the boundary layer approximation of a fluid-particle mixture by assuming W = W_p. The error introduced is consistent with Prandtl's boundary layer approximation. Outside the boundary layer, the flow field has to satisfy the “inviscid equation” in which the viscous stress terms are absent while the drag force between the particle cloud and the fluid is still important. Increasing the particle concentration reduces the boundary layer thickness and the amount of mixture transported outward. A new parameter, β = 1/(Ω τ_v), is introduced, which is also proportional to μ. The secondary flow of the particle cloud depends very much on β. For small values of β, the particle cloud velocity attains its maximum value on the surface of the disc, and for infinitely large values of β, both the radial and axial particle velocity components vanish on the surface of the disc.

Part II

The “inviscid” equation for a gas-particle mixture is linearized to describe the flow over a wavy wall. Corresponding to the Prandtl-Glauert equation for pure gas, a fourth-order partial differential equation in terms of the velocity potential ϕ is obtained for the mixture. The solution is obtained for the flow over a periodic wavy wall. For equilibrium flows, where λ_v and λ_T approach zero, and frozen flows, in which λ_v and λ_T become infinitely large, the flow problem is basically similar to that obtained by Ackeret for a pure gas. For finite values of λ_v and λ_T, all quantities except v are not in phase with the wavy wall. Thus the drag coefficient C_D is present even in the subsonic case, and similarly, all quantities decay exponentially for supersonic flows. The phase shift and the attenuation factor increase with increasing particle concentration.
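
For comparison (a standard equation supplied here, not text from the abstract), the pure-gas Prandtl-Glauert equation referred to above is

(1 - M_\infty^{2})\,\varphi_{xx} + \varphi_{yy} = 0,

whereas the finite relaxation times λ_v and λ_T of the particle cloud raise the corresponding equation for the mixture's velocity potential to fourth order.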

Part III

Using the boundary layer approximation, the initial development of the combustion zone in the laminar mixing of two parallel streams of oxidizing agent and small, solid, combustible particles suspended in an inert gas is investigated. For the special case when the two streams are moving at the same speed, a Green's function exists for the differential equations describing first-order gas temperature and oxidizer concentration. Solutions in terms of error functions and exponential integrals are obtained. Reactions occur within a relatively thin region of the order of λ_D. Thus, it seems advantageous in the general study of two-dimensional laminar flame problems to introduce a chemical boundary layer of thickness λ_D within which reactions take place. Outside this chemical boundary layer, the flow field corresponds to ordinary fluid dynamics without chemical reaction.

Part IV

The shock wave structure in a condensing medium of small liquid droplets suspended in a homogeneous gas-vapor mixture consists of the conventional compressive wave followed by a relaxation region in which the particle cloud and gas mixture attain momentum and thermal equilibrium. Immediately following the compressive wave, the partial pressure corresponding to the vapor concentration in the gas mixture is higher than the vapor pressure of the liquid droplets, and condensation sets in. Farther downstream of the shock, evaporation appears when the particle temperature is raised by the hot surrounding gas mixture. The thickness of the condensation region depends very much on the latent heat. For relatively high latent heat, the condensation zone is small compared with Λ_D.

For solid particles suspended initially in an inert gas, the relaxation zone immediately following the compression wave consists of a region where the particle temperature is first being raised to its melting point. When the particles are totally melted as the particle temperature is further increased, evaporation of the particles also plays a role.

The equilibrium condition downstream of the shock can be calculated and is independent of the model of the particle-gas mixture interaction.

Part V

For a gas containing particles of two distinct sizes and satisfying certain conditions, momentum transfer due to collisions between the two groups of particles can be taken into consideration using the classical elastic spherical ball model. Both in the relatively simple problem of normal shock wave and the perturbation solutions for the nozzle flow, the transfer of momentum due to collisions which decreases the velocity difference between the two groups of particles is clearly demonstrated. The difference in temperature as compared with the collisionless case is quite negligible.

Relevance: 20.00%

Abstract:

Part I

Several approximate Hartree-Fock SCF wavefunctions for the ground electronic state of the water molecule have been obtained using an increasing number of multicenter s, p, and d Slater-type atomic orbitals as basis sets. The predicted charge distribution has been extensively tested at each stage by calculating the electric dipole moment, molecular quadrupole moment, diamagnetic shielding, Hellmann-Feynman forces, and electric field gradients at both the hydrogen and the oxygen nuclei. It was found that a carefully optimized minimal basis set suffices to describe the electronic charge distribution adequately except in the vicinity of the oxygen nucleus. Our calculations indicate, for example, that the correct prediction of the field gradient at this nucleus requires a more flexible linear combination of p-orbitals centered on this nucleus than that in the minimal basis set. Theoretical values for the molecular octopole moment components are also reported.

Part II

The perturbation-variational theory of R. M. Pitzer for nuclear spin-spin coupling constants is applied to the HD molecule. The zero-order molecular orbital is described in terms of a single 1s Slater-type basis function centered on each nucleus. The first-order molecular orbital is expressed in terms of these two functions plus one singular basis function each of the types e^(-r)/r and e^(-r) ln r centered on one of the nuclei. The new kinds of molecular integrals were evaluated to high accuracy using numerical and analytical means. The value of the HD spin-spin coupling constant calculated with this near-minimal set of basis functions is J_HD = +96.6 cps. This represents an improvement over the previous calculated value of +120 cps obtained without using the logarithmic basis function, but is still considerably off in magnitude compared with the experimental measurement of J_HD = +43.0 ± 0.5 cps.

Relevance: 20.00%

Abstract:

This investigation is concerned with the notion of concentrated loads in classical elastostatics and related issues. Following a limit treatment of problems involving concentrated internal and surface loads, the orders of the ensuing displacements and stress singularities, as well as the stress resultants of the latter, are determined. These conclusions are taken as a basis for an alternative direct formulation of concentrated-load problems, the completeness of which is established through an appropriate uniqueness theorem. In addition, the present work supplies a reciprocal theorem and an integral representation theorem applicable to singular problems of the type under consideration. Finally, in the course of the analysis presented here, the theory of Green's functions in elastostatics is extended.