15 results for Actor. Receiver. Reception. Presence. Representation

in CaltechTHESIS


Relevance: 20.00%

Publisher:

Abstract:

This paper is in two parts. In the first part we give a qualitative study of wave propagation in an inhomogeneous medium, principally by geometrical optics and ray theory. The inhomogeneity is represented by a sound-speed profile that depends on a single coordinate, the depth, and we discuss the general characteristics of wave propagation that result from a source placed on the sound channel axis. We show that our mathematical model of the sound speed in the ocean predicts some of the observed behavior of the underwater sound channel. Using ray-theoretic techniques we investigate the implications of our profile for the following characteristics of SOFAR propagation: (i) sound energy traveling farther from the axis takes less time to travel from source to receiver than sound energy traveling closer to the axis, (ii) the focusing of sound energy in the sound channel at certain ranges, and (iii) the overall ray picture in the sound channel.
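
As an illustrative sketch of point (i) only, the following Python fragment integrates travel time and horizontal range along a ray in a stratified medium using Snell's law. The parabolic profile and all parameter values are assumptions made for the example; the thesis itself works with an Epstein profile.

    import numpy as np

    # Illustrative channel parameters (assumed, not from the thesis):
    C0 = 1500.0    # sound speed on the channel axis, m/s
    EPS = 0.006    # strength of the parabolic perturbation
    B = 1000.0     # channel length scale, m

    def c(z):
        # Sound speed increases away from the axis (z = 0 on the axis).
        return C0 * (1.0 + EPS * (z / B) ** 2)

    def quarter_cycle(theta0, nz=50000):
        # Travel time and horizontal range over one quarter ray cycle for
        # a ray leaving the axis at grazing angle theta0 (radians), using
        # Snell's law for stratified media: cos(theta(z)) / c(z) = p.
        p = np.cos(theta0) / C0
        z_turn = B * np.sqrt((1.0 / np.cos(theta0) - 1.0) / EPS)
        z = np.linspace(0.0, 0.999999 * z_turn, nz)
        cz = c(z)
        sin_th = np.sqrt(1.0 - (p * cz) ** 2)
        trap = lambda y: float(np.sum((y[1:] + y[:-1]) * np.diff(z)) / 2.0)
        return trap(1.0 / (cz * sin_th)), trap(p * cz / sin_th)

    # Steeper rays swing farther from the axis yet accumulate less travel
    # time per unit range -- the behavior described in (i).
    for deg in (2.0, 6.0, 10.0):
        t, x = quarter_cycle(np.radians(deg))
        print(f"theta0 = {deg:4.1f} deg: {1e3 * t / x:.6f} ms per meter of range")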

In the second part a more penetrating quantitative study is done by means of analytical techniques on the governing equations. We study the transient problem for the Epstein profile by employing a double transform to formally derive an integral representation for the acoustic pressure amplitude, and from this representation we obtain several alternative representations. We study the case where both source and receiver are on the channel axis and greatly separated. In particular we verify some of the earlier results derived by ray theory and obtain asymptotic results for the acoustic pressure in the far-field.

Relevance: 20.00%

Publisher:

Abstract:

Neurons in the songbird forebrain nucleus HVc are highly sensitive to auditory temporal context and have some of the most complex auditory tuning properties yet discovered. HVc is crucial for learning, perceiving, and producing song; it is therefore important to understand the neural circuitry and mechanisms that give rise to these remarkable auditory response properties. This thesis investigates these issues experimentally and computationally.

Extracellular studies reported here compare the auditory context sensitivity of neurons in HVc with that of neurons in the afferent areas of field L. These demonstrate a substantial increase in auditory temporal context sensitivity from the areas of field L to HVc. Whole-cell recordings of HVc neurons from acute brain slices show that excitatory synaptic transmission between HVc neurons involves the release of glutamate and the activation of both AMPA/kainate and NMDA-type glutamate receptors. Additionally, widespread inhibitory interactions exist between HVc neurons that are mediated by postsynaptic GABA_A receptors. Intracellular recordings of HVc auditory neurons in vivo provide evidence that HVc neurons encode information about temporal structure using a variety of cellular and synaptic mechanisms, including syllable-specific inhibition, excitatory postsynaptic potentials with a range of time courses, burst firing, and song-specific hyperpolarization.

The final part of this thesis presents two computational approaches for representing and learning temporal structure. The first method utilizes computational elements that are analogous to temporal combination-sensitive neurons in HVc. A network of these elements can learn using local information and lateral inhibition. The second method presents a more general framework that allows a network to discover mixtures of temporal features in a continuous stream of input.
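
A minimal sketch of the first method's two ingredients, local learning and lateral inhibition, is given below. The syllable coding, network sizes, and update rule are illustrative assumptions, not the model from the thesis.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy setup (assumed): inputs are pairs of consecutive "syllables",
    # coded one-hot, and each unit learns one temporal combination.
    n_syllables, n_units, lr = 5, 8, 0.1
    W = rng.uniform(0.0, 0.1, size=(n_units, 2 * n_syllables))

    def one_hot(i, n):
        v = np.zeros(n)
        v[i] = 1.0
        return v

    def present(prev_syl, cur_syl):
        # Temporal-combination input: previous syllable concatenated
        # with the current one.
        x = np.concatenate([one_hot(prev_syl, n_syllables),
                            one_hot(cur_syl, n_syllables)])
        winner = int(np.argmax(W @ x))      # lateral inhibition: only the
        W[winner] += lr * (x - W[winner])   # most active unit updates (local rule)
        return winner

    # Train on a repeated "song" so units specialize on its transitions.
    song = [0, 1, 2, 3, 4]
    for _ in range(200):
        for prev, cur in zip(song, song[1:]):
            present(prev, cur)
    for prev, cur in zip(song, song[1:]):
        x = np.concatenate([one_hot(prev, n_syllables), one_hot(cur, n_syllables)])
        print((prev, cur), "-> unit", int(np.argmax(W @ x)))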

Relevance: 20.00%

Publisher:

Abstract:

Cancellation of interfering frequency-modulated (FM) signals is investigated, with emphasis on applications to the cellular telephone channel as an important example of a multiple-access communications system. In order to fairly evaluate analog FM multiaccess systems against more complex digital multiaccess systems, a serious attempt to mitigate interference in the FM systems must be made. Information-theoretic results in the field of interference channels are shown to motivate the estimation and subtraction of undesired interfering signals. This thesis briefly examines the relative optimality of current FM techniques in known interference channels before pursuing the estimation and subtraction of interfering FM signals.

The capture-effect phenomenon of FM reception is exploited to produce simple interference-cancelling receivers with a cross-coupled topology. The use of phase-locked loop receivers cross-coupled with amplitude-tracking loops to estimate the FM signals is explored. The theory and function of these cross-coupled phase-locked loop (CCPLL) interference cancellers are examined. New interference cancellers inspired by optimal estimation and the CCPLL topology are developed, resulting in simpler receivers than those in prior art. Signal acquisition and capture effects in these complex dynamical systems are explained using the relationship of the dynamical systems to adaptive noise cancellers.
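
The sketch below caricatures the cross-coupled idea in discrete time: a first-order phase-locked loop with a crude amplitude-tracking update estimates the stronger FM signal, and the reconstruction is subtracted before a second loop tracks the weaker one. Every signal parameter, loop gain, and the loop structure itself are illustrative assumptions, not the CCPLL designs of the thesis.

    import numpy as np

    # Toy baseband-rate simulation: two FM signals, strong plus weak.
    fs = 1.0e5
    t = np.arange(int(0.2 * fs)) / fs

    def fm(amp, fc, f_mod, beta):
        return amp * np.cos(2 * np.pi * fc * t + beta * np.sin(2 * np.pi * f_mod * t))

    x = fm(1.0, 5.0e3, 40.0, 3.0) + fm(0.3, 5.5e3, 70.0, 3.0)

    def pll(sig, f0, kp=0.2, ka=0.01):
        # First-order loop with an LMS-style amplitude-tracking update.
        phase, amp = 0.0, 1.0
        est = np.empty_like(sig)
        for n, s in enumerate(sig):
            ref = np.cos(phase)
            err = -s * np.sin(phase)                  # phase detector
            phase += 2 * np.pi * f0 / fs + kp * err   # NCO update
            amp += ka * ref * (s - amp * ref)         # amplitude update
            est[n] = amp * ref
        return est

    est_strong = pll(x, 5.0e3)        # loop 1 captures the strong signal
    residual = x - est_strong         # cross-coupling: subtract the estimate
    est_weak = pll(residual, 5.5e3)   # loop 2 can now track the weak signal
    print("power before / after cancellation:",
          float(np.mean(x ** 2)), float(np.mean(residual ** 2)))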

FM interference-cancelling receivers are considered for increasing the frequency reuse in a cellular telephone system. Interference mitigation in the cellular environment is seen to require tracking of the desired signal during time intervals when it is not the strongest signal present. Use of interference cancelling in conjunction with dynamic frequency-allocation algorithms is viewed as a way of improving spectrum efficiency. Performance of interference cancellers indicates possibilities for greatly increased frequency reuse. The economics of receiver improvements in the cellular system is considered, including both the mobile subscriber equipment and the provider's tower (base station) equipment.

The thesis is divided into four major parts and a summary: the introduction, motivations for the use of interference cancellation, examination of the CCPLL interference canceller, and applications to the cellular channel. The parts are dependent on each other and are meant to be read as a whole.

Relevance: 20.00%

Publisher:

Abstract:

In the quest for a descriptive theory of decision-making, the rational actor model in economics imposes rather unrealistic expectations and abilities on human decision makers. The further we move from idealized scenarios, such as perfectly competitive markets, and ambitiously extend the reach of the theory to describe everyday decision-making situations, the less sense these assumptions make. Behavioural economics has instead proposed models based on assumptions that are more psychologically realistic, with the aim of gaining more precision and descriptive power. Increased psychological realism, however, comes at the cost of a greater number of parameters and model complexity. There is now a plethora of models, based on different assumptions and applicable in differing contextual settings, and selecting the right model to use tends to be an ad-hoc process. In this thesis, we develop optimal experimental design methods and evaluate different behavioural theories against evidence from lab and field experiments.

We look at evidence from controlled laboratory experiments. Subjects are presented with choices between monetary gambles or lotteries. Different decision-making theories evaluate the choices differently and would make distinct predictions about the subjects' choices. Theories whose predictions are inconsistent with the actual choices can be systematically eliminated. Behavioural theories can have multiple parameters, requiring complex experimental designs with a very large number of possible choice tests. This imposes computational and economic constraints on using classical experimental design methods. We develop an adaptive testing methodology, Bayesian Rapid Optimal Adaptive Designs (BROAD), that sequentially chooses the "most informative" test at each stage and, based on the response, updates its posterior beliefs over the theories, which in turn determine the next most informative test to run. BROAD utilizes the Equivalence Class Edge Cutting (EC2) criterion to select tests. We prove that the EC2 criterion is adaptively submodular, which allows us to prove theoretical guarantees against the Bayes-optimal testing sequence even in the presence of noisy responses. In simulated ground-truth experiments, we find that the EC2 criterion recovers the true hypotheses with significantly fewer tests than more widely used criteria such as Information Gain and Generalized Binary Search. We show, theoretically as well as experimentally, that, surprisingly, these popular criteria can perform poorly in the presence of noise or subject errors. Furthermore, we use the adaptive submodularity of EC2 to implement an accelerated greedy version of BROAD, which leads to orders-of-magnitude speedups over other methods.
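
The toy sketch below illustrates the EC2 idea in its simplest, noise-free form: hypotheses are grouped into theory classes, edges connect hypotheses from different classes, and the greedy rule picks the test with the largest expected weight of edges cut. The hypothesis set, priors, and predictions are all invented for the example, and the noise handling that BROAD provides is omitted.

    import itertools
    import numpy as np

    # Hypotheses are (theory, parameter) pairs; each deterministically
    # predicts a binary choice on every test. Edges join hypotheses from
    # different theories; a test outcome cuts every edge touching an
    # inconsistent hypothesis.
    hypotheses = [("EV", 0), ("EV", 1), ("PT", 0), ("PT", 1), ("CRRA", 0)]
    prior = {h: 1.0 / len(hypotheses) for h in hypotheses}
    rng = np.random.default_rng(1)
    n_tests = 12
    predictions = {h: rng.integers(0, 2, n_tests) for h in hypotheses}

    def edge_weight(live):
        return sum(prior[a] * prior[b]
                   for a, b in itertools.combinations(live, 2)
                   if a[0] != b[0])            # only cross-theory edges

    def expected_cut(test, live):
        w0 = edge_weight(live)
        mass = sum(prior[h] for h in live)
        total = 0.0
        for o in (0, 1):
            keep = [h for h in live if predictions[h][test] == o]
            p_o = sum(prior[h] for h in keep) / mass
            total += p_o * (w0 - edge_weight(keep))
        return total

    live, truth = list(hypotheses), ("PT", 1)   # simulated ground truth
    while len({h[0] for h in live}) > 1:        # until one theory remains
        test = max(range(n_tests), key=lambda s: expected_cut(s, live))
        if expected_cut(test, live) <= 0.0:     # no test separates theories
            break
        o = predictions[truth][test]            # noise-free response
        live = [h for h in live if predictions[h][test] == o]
        print(f"test {test}: outcome {o}, surviving: {live}")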

We use BROAD to perform two experiments. First, we compare the main classes of theories for decision-making under risk, namely expected value, prospect theory, constant relative risk aversion (CRRA), and moments models. Subjects are given an initial endowment and sequentially presented with choices between two lotteries, with the possibility of losses. The lotteries are selected using BROAD, and 57 subjects from Caltech and UCLA are incentivized by randomly realizing one of the lotteries chosen. Aggregate posterior probabilities over the theories show limited evidence in favour of CRRA and moments models. Classifying the subjects into types showed that most subjects are described by prospect theory, followed by expected value. Adaptive experimental design raises the possibility that subjects could engage in strategic manipulation, i.e., subjects could mask their true preferences and choose differently in order to obtain more favourable tests in later rounds, thereby increasing their payoffs. We pay close attention to this problem; strategic manipulation is ruled out since it is infeasible in practice, and also since we do not find any signatures of it in our data.

In the second experiment, we compare the main theories of time preference: exponential discounting, hyperbolic discounting, the "present bias" models of quasi-hyperbolic (α, β) discounting and fixed-cost discounting, and generalized-hyperbolic discounting. 40 subjects from UCLA were given choices between two options: a smaller but more immediate payoff versus a larger but later payoff. We found very limited evidence for present-bias models and hyperbolic discounting, and most subjects were classified as generalized-hyperbolic discounting types, followed by exponential discounting.

In these models the passage of time is linear. We instead consider a psychological model where the perception of time is subjective. We prove that when the biological (subjective) time is positively dependent, it gives rise to hyperbolic discounting and temporal choice inconsistency.
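
One standard way to make this connection concrete, offered here as an illustration rather than as the thesis's own construction, is exponential discounting applied to a logarithmically compressed subjective clock:

    s(t) = \frac{\ln(1 + k t)}{k}, \qquad
    D(t) = e^{-r\, s(t)} = (1 + k t)^{-r/k},

which is exactly the generalized-hyperbolic form: preferences that are consistent in subjective time s exhibit hyperbolic discounting, and hence choice reversals, in calendar time t.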

We also test the predictions of behavioural theories in the "wild". We pay attention to prospect theory, which emerged as the dominant theory in our lab experiments on risky choice. Loss aversion and reference dependence predict that consumers will behave in ways distinct from what the standard rational model predicts. Specifically, loss aversion predicts that when an item is offered at a discount, the demand for it will be greater than its price elasticity alone explains. Even more importantly, when the item is no longer discounted, demand for its close substitute will increase excessively. We tested this prediction using a discrete choice model with a loss-averse utility function on data from a large eCommerce retailer. Not only did we identify loss aversion, but we also found that the effect decreased with consumers' experience. We outline the policy implications that consumer loss aversion entails, and strategies for competitive pricing.
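
A possible sketch of such a model follows, under an assumed functional form and assumed parameter values; the retailer data and the estimated model are, of course, not reproduced here.

    import numpy as np

    # Illustrative loss-averse logit: utility is linear in price, with
    # gains and losses measured against a reference price.
    beta, lam = 0.8, 2.25      # price sensitivity; loss-aversion coefficient

    def utility(price, ref_price):
        gain = max(ref_price - price, 0.0)   # paying less than the reference
        loss = max(price - ref_price, 0.0)   # paying more than the reference
        return -beta * price + beta * gain - lam * beta * loss

    def choice_probs(prices, ref_prices):
        u = np.array([utility(p, r) for p, r in zip(prices, ref_prices)])
        e = np.exp(u - u.max())
        return e / e.sum()

    # Item 0 discounted from 10 to 8: the reference stays at 10, so buying
    # it is coded as a gain and demand rises beyond the price effect alone.
    print(choice_probs([8.0, 9.0], [10.0, 9.0]))
    # Discount removed: the return to 10 is coded as a loss against the
    # adapted reference of 8, pushing demand toward the substitute.
    print(choice_probs([10.0, 9.0], [8.0, 9.0]))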

In future work, BROAD can be widely applied to testing different behavioural models, e.g., in social preference and game theory, and in different contextual settings. Additional measurements beyond choice data, including biological measurements such as skin conductance, can be used to eliminate hypotheses more rapidly and speed up model comparison. Discrete choice models also provide a framework for testing behavioural models with field data, and encourage combined lab-field experiments.

Relevance: 20.00%

Publisher:

Abstract:

Most space applications require deployable structures due to the limited size of current launch vehicles. Specifically, payloads in nanosatellites such as CubeSats require very high compaction ratios due to the very limited space available in this type of platform. Strain-energy-storing deployable structures can be suitable for these applications, but the curvature to which such structures can be folded is limited to the elastic range. Thanks to fiber microbuckling, high-strain composite materials can be folded to much higher curvatures without significant damage, which makes them suitable for deployable structures requiring very high compaction. However, in applications that require carrying compressive loads, fiber microbuckling also governs the strength of the material. A good understanding of the compressive strength of high-strain composites is therefore needed to determine how suitable they are for this type of application.

The goal of this thesis is to investigate, experimentally and numerically, the microbuckling in compression of high-strain composites. In particular, the compressive behavior of unidirectional carbon-fiber-reinforced silicone rods (CFRS) is studied. Experimental testing of the compressive failure of CFRS rods showed a higher strength in compression than analytical models estimate, which is unusual in standard polymer composites. This effect, first discovered in the present research, was attributed to the random variation of the carbon fiber angles with respect to the nominal direction. This is an important effect, as it implies that microbuckling strength might be increased by controlling the fiber angles. With a higher microbuckling strength, high-strain materials could carry compressive loads without reaching microbuckling and would therefore be suitable for several space applications.

A finite element model was developed to predict the homogenized stiffness of the CFRS, and the homogenization results were used in another finite element model that simulated a homogenized rod under axial compression. A statistical representation of the fiber angles was implemented in the model. The presence of fiber angles increased the longitudinal shear stiffness of the material, resulting in a higher strength in compression. The simulations showed a large increase of the strength in compression for lower values of the standard deviation of the fiber angle, and a slight decrease of strength in compression for lower values of the mean fiber angle. The strength observed in the experiments was achieved with the minimum local angle standard deviation observed in the CFRS rods, whereas the shear stiffness measured in torsion tests was achieved with the overall fiber angle distribution observed in the CFRS rods.
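
To make the role of angle scatter concrete, the sketch below Monte Carlo samples fiber misalignment angles and feeds them into the classical plastic-kinking strength estimate sigma_c = G / (1 + phi/gamma_y) (Budiansky). This textbook formula and all parameter values are stand-ins, not the homogenized finite element model of the thesis, and the sketch captures only the misalignment effect at fixed shear stiffness.

    import numpy as np

    G = 40.0e6          # longitudinal shear modulus, Pa (assumed, soft matrix)
    gamma_y = 0.05      # characteristic shear strain (assumed)
    rng = np.random.default_rng(0)

    def strength(mean_deg, std_deg, n=100000):
        # Sample misalignment angles and average the kinking estimate.
        phi = np.abs(np.radians(rng.normal(mean_deg, std_deg, n)))
        return np.mean(G / (1.0 + phi / gamma_y))

    # Smaller angle scatter -> higher predicted strength, echoing the
    # trend reported for the simulations.
    for std in (0.5, 1.0, 2.0, 4.0):
        print(f"std = {std} deg: mean strength = {strength(1.0, std) / 1e6:.2f} MPa")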

High-strain composites exhibit good bending capabilities, but they tend to be soft out of plane. To achieve a higher out-of-plane stiffness, the concept of dual-matrix composites is introduced. Dual-matrix composites are foldable composites that are soft in the crease regions and stiff elsewhere. Previous attempts to fabricate continuous dual-matrix fiber composite shells had limited performance due to excessive resin flow and matrix mixing. An alternative method presented in this thesis uses UV-cure silicone and fiberglass to avoid these problems. Preliminary experiments on the effect of folding on the out-of-plane stiffness are presented. An application to a conical log-periodic antenna for CubeSats is proposed, using origami-inspired stowing schemes that allow a conical dual-matrix composite shell to reach very high compaction ratios.

Relevance: 20.00%

Publisher:

Abstract:

We present the first experimental evidence that the heat capacity of superfluid ⁴He, at temperatures very close to the lambda transition temperature T_λ, is enhanced by a constant heat flux Q. The heat capacity at constant Q, C_Q, is predicted to diverge at a temperature T_c(Q) < T_λ at which superflow becomes unstable. In agreement with previous measurements, we find that dissipation enters our cell at a temperature T_DAS(Q) below the theoretical value T_c(Q). Our measurements of C_Q were taken using the discrete pulse method at fourteen different heat flux values in the range 1 µW/cm² ≤ Q ≤ 4 µW/cm². The excess heat capacity ∆C_Q we measure has the predicted scaling behavior as a function of T and Q: ∆C_Q · t^α ∝ (Q/Q_c)², where Q_c(T) ~ t is the critical heat current that results from inverting the equation for T_c(Q). We find that if the theoretical value of T_c(Q) is correct, then ∆C_Q is considerably larger than anticipated. On the other hand, if T_c(Q) ≈ T_DAS(Q), then ∆C_Q is the same magnitude as the theoretically predicted enhancement.

Relevance: 20.00%

Publisher:

Abstract:

There is a growing interest in taking advantage of possible patterns and structures in data so as to extract the desired information and overcome the curse of dimensionality. In a wide range of applications, including computer vision, machine learning, medical imaging, and social networks, the signal that gives rise to the observations can be modeled to be approximately sparse and exploiting this fact can be very beneficial. This has led to an immense interest in the problem of efficiently reconstructing a sparse signal from limited linear observations. More recently, low-rank approximation techniques have become prominent tools to approach problems arising in machine learning, system identification and quantum tomography.

In sparse and low-rank estimation problems, the challenge is the inherent intractability of the objective function, and one needs efficient methods to capture the low-dimensionality of these models. Convex optimization is often a promising tool to attack such problems. An intractable problem with a combinatorial objective can often be "relaxed" to obtain a tractable but almost as powerful convex optimization problem. This dissertation studies convex optimization techniques that can take advantage of low-dimensional representations of the underlying high-dimensional data. We provide provable guarantees that ensure that the proposed algorithms will succeed under reasonable conditions, and answer questions of the following flavor:

  • For a given number of measurements, can we reliably estimate the true signal?
  • If so, how good is the reconstruction as a function of the model parameters?

More specifically, i) Focusing on linear inverse problems, we generalize the classical error bounds known for the least-squares technique to the lasso formulation, which incorporates the signal model. ii) We show that intuitive convex approaches do not perform as well as expected when it comes to signals that have multiple low-dimensional structures simultaneously. iii) Finally, we propose convex relaxations for the graph clustering problem and give sharp performance guarantees for a family of graphs arising from the so-called stochastic block model. We pay particular attention to the following aspects. For i) and ii), we aim to provide a general geometric framework, in which the results on sparse and low-rank estimation can be obtained as special cases. For i) and iii), we investigate the precise performance characterization, which yields the right constants in our bounds and the true dependence between the problem parameters.
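
As a concrete instance of item i), the sketch below contrasts plain least squares with the lasso on a toy sparse recovery problem, solving the lasso by ISTA (proximal gradient). The problem sizes, noise level, and regularization weight are assumed for illustration.

    import numpy as np

    rng = np.random.default_rng(0)
    n, p, k = 50, 200, 5                    # measurements, dimension, sparsity
    A = rng.normal(size=(n, p)) / np.sqrt(n)
    x_true = np.zeros(p)
    x_true[rng.choice(p, k, replace=False)] = rng.normal(size=k)
    y = A @ x_true + 0.01 * rng.normal(size=n)

    lam = 0.05
    step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1 / L, with L the Lipschitz constant
    x = np.zeros(p)
    for _ in range(2000):                   # ISTA iterations
        g = A.T @ (A @ x - y)               # gradient of 0.5 * ||Ax - y||^2
        z = x - step * g
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold

    x_ls = np.linalg.lstsq(A, y, rcond=None)[0]   # minimum-norm least squares
    print("lasso error:", np.linalg.norm(x - x_true))
    print("least-squares error:", np.linalg.norm(x_ls - x_true))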

Relevance: 20.00%

Publisher:

Abstract:

With continuing advances in CMOS technology, feature sizes of modern silicon chip-sets have shrunk drastically over the past decade. In addition to desktop and laptop processors, a vast majority of these chips are also deployed in mobile communication devices like smartphones and tablets, where multiple radio-frequency integrated circuits (RFICs) must be integrated into one device to cater to a wide variety of applications such as Wi-Fi, Bluetooth, NFC, wireless charging, etc. While a small feature size enables higher integration levels, allowing billions of transistors to co-exist on a single chip, it also makes these silicon ICs more susceptible to variations. Part of this variation can be attributed to the manufacturing process itself, particularly the stringent dimensional tolerances associated with the lithographic steps in modern processes. Additionally, RF and millimeter-wave communication chip-sets are subject to another type of variation caused by dynamic changes in the operating environment. Another bottleneck in the development of high-performance RF/mm-wave silicon ICs is the lack of accurate analog/high-frequency models in nanometer CMOS processes. This can be attributed primarily to the fact that most cutting-edge processes are geared towards digital system implementation, and as such there is little model-to-hardware correlation at RF frequencies.

All these issues have significantly degraded the yield of high-performance mm-wave and RF CMOS systems, which often require multiple trial-and-error silicon validations, thereby incurring additional production costs. This dissertation proposes a low-overhead technique which attempts to counter the detrimental effects of these variations, thereby improving both performance and yield of chips post fabrication in a systematic way. The key idea behind this approach is to dynamically sense the performance of the system, identify when a problem has occurred, and then actuate the system back to its desired performance level through an intelligent on-chip optimization algorithm. We term this technique self-healing, drawing inspiration from nature's own way of healing the body against adverse environmental effects. To demonstrate the efficacy of self-healing in CMOS systems, several representative examples are designed, fabricated, and measured against a variety of operating conditions.
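
The closed loop can be caricatured in a few lines: read a performance sensor, sweep each actuator knob, keep the best code, repeat. The sketch below is a generic stand-in with a simulated sensor, not the on-chip algorithm of the dissertation.

    import numpy as np

    rng = np.random.default_rng(0)
    OPTIMUM = np.array([5, 2, 6])     # unknown best codes (simulated process shift)

    def sense(codes):
        # Stand-in for an on-chip sensor reading (e.g., output power),
        # peaked at OPTIMUM, with measurement noise.
        return -np.sum((np.array(codes) - OPTIMUM) ** 2) + 0.1 * rng.normal()

    def heal(codes, levels=8, sweeps=3):
        # Greedy coordinate descent over discrete actuator codes, driven
        # only by sensor readings.
        codes = list(codes)
        for _ in range(sweeps):
            for knob in range(len(codes)):
                scores = []
                for v in range(levels):   # sweep one actuator, sense each setting
                    trial = codes.copy()
                    trial[knob] = v
                    scores.append(sense(trial))
                codes[knob] = int(np.argmax(scores))
        return codes

    print("healed actuator codes:", heal([0, 0, 0]))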

We demonstrate a high-power mm-wave segmented power-mixer-array transmitter architecture that is capable of generating high-speed, non-constant-envelope modulations at higher efficiencies than existing conventional designs. We then incorporate several sensors and actuators into the design and demonstrate closed-loop healing against a wide variety of non-ideal operating conditions. We also demonstrate fully integrated self-healing in the context of another mm-wave power amplifier, where measurements across several chips show significant improvements in performance as well as reduced variability in the presence of process variations, load impedance mismatch, and even catastrophic transistor failure. Finally, on the receiver side, a closed-loop self-healing phase synthesis scheme is demonstrated in conjunction with a wide-band voltage-controlled oscillator to generate phase-shifted local oscillator (LO) signals for a phased-array receiver. The system is shown to heal against non-idealities in the LO signal generation and distribution, significantly reducing phase errors across a wide range of frequencies.

Relevance: 20.00%

Publisher:

Abstract:

The electron diffraction investigation of the following compounds has been carried out: sulfur, sulfur nitride, realgar, arsenic trisulfide, spiropentane, dimethyl trisulfide, cis- and trans-lewisite, methylal, and ethylene glycol.

The crystal structures of the following salts have been determined by x-ray diffraction: silver molybdate and hydrazinium dichloride.

Suggested revisions of the covalent radii for B, Si, P, Ge, As, Sn, Sb, and Pb have been made, and values for the covalent radii of Al, Ga, In, Tl, and Bi have been proposed.

The Schomaker-Stevenson revision of the additivity rule for single covalent bond distances has been used in conjunction with the revised radii. Agreement with experiment is in general better with the revised radii than with the former radii and additivity.
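
For reference, the Schomaker-Stevenson rule in its usual form corrects simple additivity with an electronegativity term,

    d_{AB} = r_A + r_B - 0.09\,\lvert x_A - x_B \rvert \quad (\text{distances in Å}),

where r_A, r_B are the covalent radii and x_A, x_B the electronegativities of the bonded atoms; the 0.09 Å coefficient is the commonly quoted value rather than one taken from this thesis.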

The principle of ionic bond character in addition to that present in a normal covalent bond has been applied to the observed structures of numerous molecules. It leads to a method of interpretation which is at least as consistent as the theory of multiple bond formation.

The revision of the additivity rule has been extended to double bonds. An encouraging beginning along these lines has been made, but additional experimental data are needed for clarification.

Relevance: 20.00%

Publisher:

Abstract:

Part I

A study of the thermal reaction of water vapor and parts-per-million concentrations of nitrogen dioxide was carried out at ambient temperature and at atmospheric pressure. Nitric oxide and nitric acid vapor were the principal products. The initial rate of disappearance of nitrogen dioxide was first order with respect to water vapor and second order with respect to nitrogen dioxide. An initial third-order rate constant of 5.5 (± 0.29) × 10^4 liter^2 mole^-2 sec^-1 was found at 25˚C. The rate of reaction decreased with increasing temperature. In the temperature range of 25˚C to 50˚C, an activation energy of -978 (± 20) calories was found.
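
A minimal numerical reading of this rate law follows, with assumed ppm-scale starting concentrations rather than the paper's experimental conditions:

    # Forward-Euler integration of the reported initial rate law,
    #   -d[NO2]/dt = k [H2O] [NO2]^2,   k = 5.5e4 l^2 mol^-2 s^-1 at 25 C.
    k = 5.5e4                  # l^2 mol^-2 s^-1 (reported value)
    h2o = 1.0e-3               # mol/l, assumed water vapor concentration
    no2_0 = 5.0e-8             # mol/l, assumed ppm-scale initial NO2
    no2, t, dt = no2_0, 0.0, 1.0
    while no2 > 0.5 * no2_0:   # integrate until half the NO2 is consumed
        no2 -= k * h2o * no2 ** 2 * dt   # water vapor in large excess, held fixed
        t += dt
    print(f"NO2 half-life under these conditions: {t / 3600:.0f} hours")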

The reaction did not go to completion. From measurements as the reaction approached equilibrium, the free energy of nitric acid vapor was calculated. This value was -18.58 (± 0.04) kilocalories at 25˚C.

The initial rate of reaction was unaffected by the presence of oxygen and was retarded by the presence of nitric oxide. There were no appreciable effects due to the surface of the reactor. Nitric oxide and nitrogen dioxide were monitored by gas chromatography during the reaction.

Part II

The air oxidation of nitric oxide, and the oxidation of nitric oxide in the presence of water vapor, were studied in a glass reactor at ambient temperatures and at atmospheric pressure. The concentration of nitric oxide was less than 100 parts-per-million. The concentration of nitrogen dioxide was monitored by gas chromatography during the reaction.

For the dry oxidation, the third-order rate constant was 1.46 (± 0.03) × 10^4 liter^2 mole^-2 sec^-1 at 25˚C. The activation energy, obtained from measurements between 25˚C and 50˚C, was -1.197 (± 0.02) kilocalories.

The presence of water vapor during the oxidation caused the formation of nitrous acid vapor when nitric oxide, nitrogen dioxide, and water vapor combined. By measuring the difference between the concentrations of nitrogen dioxide during the wet and dry oxidations, the rate of formation of nitrous acid vapor was found. The third-order rate constant for the formation of nitrous acid vapor was equal to 1.5 (± 0.5) × 10^5 liter^2 mole^-2 sec^-1 at 40˚C. The reaction rate did not change measurably when the temperature was increased to 50˚C. The formation of nitric acid vapor was prevented by keeping the concentration of nitrogen dioxide low.

Surface effects were appreciable for the wet tests. Below 35˚C, the rate of appearance of nitrogen dioxide increased with increasing surface. Above 40˚C, the effect of surface was small.

Relevance: 20.00%

Publisher:

Abstract:

The problem of the continuation to complex values of the angular momentum of the partial wave amplitude is examined for the simplest production process, that of two particles → three particles. The presence of so-called "anomalous singularities" complicates the procedure followed relative to that used for quasi two-body scattering amplitudes. The anomalous singularities are shown to lead to exchange degenerate amplitudes with possible poles in much the same way as "normal" singularities lead to the usual signatured amplitudes. The resulting exchange-degenerate trajectories would also be expected to occur in two-body amplitudes.

The representation of the production amplitude in terms of the singularities of the partial wave amplitude is then developed and applied to the high energy region, with attention being paid to the emergence of "double Regge" terms. Certain new results are obtained for the behavior of the amplitude at zero momentum transfer, and some predictions of polarization and minima in momentum transfer distributions are made. A calculation of the polarization of the ρ⁰ meson in the reaction π⁻p → π⁻ρ⁰p at high energy with small momentum transfer to the proton is compared with data taken at 25 GeV by W. D. Walker and collaborators. The result is favorable, although limited by the statistics of the available data.

Relevance: 20.00%

Publisher:

Abstract:

The problem of the representation of a signal envelope is treated, motivated by the classical Hilbert representation, in which the envelope is expressed in terms of the received signal and its Hilbert transform. It is shown that the Hilbert representation is the proper one if the received signal is strictly bandlimited, but that some other filter is more appropriate when the signal is not bandlimited. A specific alternative filter, the conjugate filter, is proposed, and the overall envelope estimation error is evaluated to show that, for a specific received-signal power spectral density, the proposed filter yields a lower envelope error than the Hilbert filter.
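
For orientation, the classical Hilbert-transform envelope that serves as the baseline can be computed in a few lines; the AM test signal here is an assumed example, and the conjugate filter of the thesis is not reproduced.

    import numpy as np
    from scipy.signal import hilbert

    fs = 8000.0
    t = np.arange(int(0.1 * fs)) / fs
    true_env = 1.0 + 0.5 * np.cos(2 * np.pi * 5.0 * t)   # slow envelope
    x = true_env * np.cos(2 * np.pi * 400.0 * t)         # modulated carrier

    env = np.abs(hilbert(x))     # |analytic signal| = envelope estimate
    err = np.sqrt(np.mean((env - true_env) ** 2))
    print(f"rms envelope error: {err:.4f}")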

Relevance: 20.00%

Publisher:

Abstract:

Let F(θ) be a separable extension of degree n of a field F. Let Δ and D be integral domains with quotient fields F(θ) and F, respectively. Assume that Δ ⊇ D. A mapping φ of Δ into the n × n matrices over D is called a Δ/D rep if (i) it is a ring isomorphism and (ii) it maps d onto dI_n whenever d ∈ D. If the matrices are also symmetric, φ is a Δ/D symrep.

Every Δ/D rep can be extended uniquely to an F(θ)/F rep. This extension is completely determined by the image of θ. Two Δ/D reps are called equivalent if the images of θ differ by a D-unimodular similarity. There is a one-to-one correspondence between classes of Δ/D reps and classes of Δ ideals having an n-element basis over D.

The condition that a given Δ/D rep class contain a Δ/D symrep can be phrased in various ways. Using these formulations it is possible to (i) bound the number of symreps in a given class, (ii) count the number of symreps if F is finite, (iii) establish the existence of an F(θ)/F symrep when n is odd, F is an algebraic number field, and F(θ) is totally real if F is formally real (for n = 3 see Sapiro, “Characteristic polynomials of symmetric matrices” Sibirsk. Mat. Ž. 3 (1962) pp. 280-291), and (iv) study the case D = Z, the integers (see Taussky, “On matrix classes corresponding to an ideal and its inverse” Illinois J. Math. 1 (1957) pp. 108-113 and Faddeev, “On the characteristic equations of rational symmetric matrices” Dokl. Akad. Nauk SSSR 58 (1947) pp. 753-754).

The case D = Z and n = 2 is studied in detail. Let Δ’ be an integral domain also having quotient field F(θ) and such that Δ’ ⊇ Δ. Let φ be a Δ/Z symrep. A method is given for finding a Δ’/Z symrep ʘ such that the Δ’ ideal class corresponding to the class of ʘ is an extension to Δ’ of the Δ ideal class corresponding to the class of φ. The problem of finding all Δ/Z symreps equivalent to a given one is studied.
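
A small illustration of the n = 2, D = Z case (an example constructed here, not taken from the thesis): for Δ = Z[\sqrt{m}], a symmetric integral matrix with characteristic polynomial x^2 - m must have vanishing trace and determinant -m, so

    \theta = \sqrt{m} \;\longmapsto\; \begin{pmatrix} a & b \\ b & -a \end{pmatrix},
    \qquad a^2 + b^2 = m, \quad a, b \in \mathbb{Z},

and hence a Z[\sqrt{m}]/Z symrep exists precisely when m is a sum of two integer squares.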

Relevance: 20.00%

Publisher:

Abstract:

A study was made of the means by which turbulent flows entrain sediment grains from alluvial stream beds. Entrainment was considered to include both the initiation of sediment motion and the suspension of grains by the flow. Observations of grain motion induced by turbulent flows led to the formulation of an entrainment hypothesis. It was based on the concept of turbulent eddies disrupting the viscous sublayer and impinging directly onto the grain surface. It is suggested that entrainment results from the interaction between fluid elements within an eddy and the sediment grains.

A pulsating jet was used to simulate the flow conditions in a turbulent boundary layer. Evidence is presented to establish the validity of this representation. Experiments were made to determine the dependence of jet strength, defined below, upon sediment and fluid properties. For a given sediment and fluid, and fixed jet geometry, there were two critical values of jet strength: one at which grains started to roll across the bed, and one at which grains were projected up from the bed. The jet strength K is a function of the pulse frequency ω and the pulse amplitude A, defined by

K = Aω^(-s),

where s is the slope of a plot of log A against log ω. Pulse amplitude is equal to the volume of fluid ejected at each pulse divided by the cross-sectional area of the jet tube.

Dimensional analysis was used to determine the parameters by which the data from the experiments could be correlated. Based on this, a method was devised for computing the pulse amplitude and frequency necessary either to move or project grains from the bed for any specified fluid and sediment combination.

Experiments made in a laboratory flume with a turbulent flow over a sediment bed are described. Dye injection was used to show the presence, in a turbulent boundary layer, of two important aspects of the pulsating jet model and the impinging eddy hypothesis. These were the intermittent nature of the sublayer and the presence of velocities with vertical components adjacent to the sediment bed.

A discussion of flow conditions, and the resultant grain motion, that occurred over sediment beds of different form is given. The observed effects of the sediment and fluid interaction are explained, in each case, in terms of the entrainment hypothesis.

The study does not suggest that the proposed entrainment mechanism is the only one by which grains can be entrained. However, in the writer’s opinion, the evidence presented strongly suggests that the impingement of turbulent eddies onto a sediment bed plays a dominant role in the process.

Relevance: 20.00%

Publisher:

Abstract:

This thesis presents methods for incrementally constructing controllers in the presence of uncertainty and nonlinear dynamics. The basic setting is motion planning subject to temporal logic specifications. Broadly, two categories of problems are treated. The first is reactive formal synthesis when so-called discrete abstractions are available. The fragment of linear-time temporal logic (LTL) known as GR(1) is used to express assumptions about an adversarial environment and requirements of the controller. Two problems of changes to a specification are posed that concern the two major aspects of GR(1): safety and liveness. Algorithms providing incremental updates to strategies are presented as solutions. In support of these, an annotation of strategies is developed that facilitates repeated modifications. A variety of properties are proven about it, including necessity of its existence and sufficiency for a strategy to be winning. The second category of problems considered is non-reactive (open-loop) synthesis in the absence of a discrete abstraction. Instead, the presented stochastic optimization methods directly construct a control input sequence that achieves low cost and satisfies an LTL formula. Several relaxations are considered as heuristics to address the rarity of sampled trajectories that satisfy an LTL formula, and they are demonstrated to improve convergence rates for a Dubins car and single integrators subject to a recurrence task.
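
A minimal sketch of the open-loop, sampling-based idea: the cross-entropy method optimizing a steering sequence for a Dubins car on a plain reach task, which stands in for an LTL specification. The dynamics discretization, cost, and all parameters are assumptions of this example, not the methods or relaxations of the thesis.

    import numpy as np

    rng = np.random.default_rng(0)
    H, dt, v = 30, 0.2, 1.0          # horizon, time step, forward speed
    GOAL = np.array([4.0, 3.0])

    def rollout(u):
        # Integrate Dubins car dynamics under steering-rate sequence u.
        x = np.zeros(3)              # state: (px, py, heading)
        for w in u:
            x = x + dt * np.array([v * np.cos(x[2]), v * np.sin(x[2]), w])
        return np.linalg.norm(x[:2] - GOAL)   # cost: final distance to goal

    mu, sigma = np.zeros(H), np.ones(H)
    for it in range(40):             # cross-entropy iterations
        U = rng.normal(mu, sigma, size=(200, H))       # sample candidates
        costs = np.array([rollout(u) for u in U])
        elite = U[np.argsort(costs)[:20]]              # keep the best 10%
        mu, sigma = elite.mean(axis=0), elite.std(axis=0) + 1e-3
    print(f"best final distance to goal: {rollout(mu):.3f}")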