11 results for Rough Kernels
in CaltechTHESIS
Abstract:
Studies in turbulence often focus on two flow conditions, both of which occur frequently in real-world flows and are sought-after for their value in advancing turbulence theory. These are the high Reynolds number regime and the effect of wall surface roughness. In this dissertation, a Large-Eddy Simulation (LES) recreates both conditions over a wide range of Reynolds numbers Reτ = O(10²)-O(10⁸) and accounts for roughness by locally modeling the statistical effects of near-wall anisotropic fine scales in a thin layer immediately above the rough surface. A subgrid, roughness-corrected wall model is introduced to dynamically transmit this modeled information from the wall to the outer LES, which uses a stretched-vortex subgrid-scale model operating in the bulk of the flow. Of primary interest is the Reynolds number and roughness dependence of these flows in terms of first- and second-order statistics. The LES is first applied to a fully turbulent uniformly-smooth/rough channel flow to capture the flow dynamics over smooth, transitionally rough and fully rough regimes. Results include a Moody-like diagram for the wall-averaged friction factor, believed to be the first of its kind obtained from LES. Confirmation is found for experimentally observed logarithmic behavior in the normalized stream-wise turbulent intensities. Tight logarithmic collapse, scaled on the wall friction velocity, is found for smooth-wall flows when Reτ ≥ O(10⁶) and in fully rough cases. Since the wall model operates locally and dynamically, the framework is used to investigate non-uniform roughness distribution cases in a channel, where the flow adjustments to sudden surface changes are investigated. Recovery of mean quantities and turbulent statistics after transitions is discussed qualitatively and quantitatively at various roughness and Reynolds number levels.
The internal boundary layer, which is defined as the border between the flow affected by the new surface condition and the unaffected part, is computed, and a collapse of the profiles on a length scale containing the logarithm of the friction Reynolds number is presented. Finally, we turn to the possibility of expanding the present framework to accommodate more general geometries. As a first step, the whole LES framework is modified for use in the curvilinear geometry of a fully-developed turbulent pipe flow, with implementation carried out in a spectral element solver capable of handling complex wall profiles. The friction factors show favorable agreement with superpipe data, and the LES estimates of the Kármán constant and additive constant of the log-law closely match values obtained from experiment.
Abstract:
Measurements of friction and heat transfer coefficients were obtained with dilute polymer solutions flowing through electrically heated smooth and rough tubes. The polymer used was "Polyox WSR-301", and tests were performed at concentrations of 10 and 50 parts per million. The rough tubes contained a close-packed, granular type of surface with roughness-height-to-diameter ratios of 0.0138 and 0.0488, respectively. A Prandtl number range of 4.38 to 10.3 was investigated, obtained by adjusting the bulk temperature of the solution. The Reynolds numbers in the experiments were varied from approximately 10,000 (Pr = 10.3) to 250,000 (Pr = 4.38).
Friction reductions as high as 73% in smooth tubes and 83% in rough tubes were observed, accompanied by an even more drastic heat transfer reduction (as high as 84% in smooth tubes and 93% in rough tubes). The heat transfer coefficients with Polyox can be lower for a rough tube than for a smooth one.
The similarity rules previously developed for heat transfer with a Newtonian fluid were extended to dilute polymer solution pipe flows. A velocity profile similar to the one proposed by Deissler was taken as a model to interpret the friction and heat transfer data in smooth tubes. It was found that the observed results could be explained by assuming that the turbulent diffusivities are reduced in smooth tubes in the vicinity of the wall, which brings about a thickening of the viscous layer. A possible mechanism describing the effect of the polymer additive on rough pipe flow is also discussed.
Abstract:
A model equation for water waves has been suggested by Whitham to study, qualitatively at least, the different kinds of breaking. This is an integro-differential equation which combines a typical nonlinear convection term with an integral for the dispersive effects and is of independent mathematical interest. For an approximate kernel of the form e^(-b|x|) it is shown first that solitary waves have a maximum height with sharp crests and secondly that waves which are sufficiently asymmetric break into "bores." The second part applies to a wide class of bounded kernels, but the kernel giving the correct dispersion effects of water waves has a square root singularity and the present argument does not go through. Nevertheless the possibility of the two kinds of breaking in such integro-differential equations is demonstrated.
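The equation in question can be written out explicitly; the following form is standard in the literature on Whitham's model (the notation here is generic and not taken from the thesis):

```latex
\eta_t + \frac{3}{2}\,\frac{c_0}{h}\,\eta\,\eta_x
  + \int_{-\infty}^{\infty} K(x-\xi)\,\eta_\xi(\xi,t)\,d\xi = 0,
\qquad
\hat{K}(k) = \left(\frac{g}{k}\tanh kh\right)^{1/2},
```

where \(\hat{K}\) denotes the Fourier transform of the kernel. The exact \(\hat{K}\) gives rise to the square-root singularity mentioned above, while the approximate kernel \(K(x)\propto e^{-b|x|}\) retains qualitatively similar dispersion but is bounded, which is what makes the peaking and breaking arguments tractable.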
Difficulties arise in finding variational principles for continuum mechanics problems in the Eulerian (field) description. The reason is found to be that continuum equations in the original field variables lack a mathematical "self-adjointness" property which is necessary for Euler equations. This is a feature of the Eulerian description and occurs in non-dissipative problems which have variational principles for their Lagrangian description. To overcome this difficulty a "potential representation" approach is used which consists of transforming to new (Eulerian) variables whose equations are self-adjoint. The transformations to the velocity potential or stream function in fluids, or the scalar and vector potentials in electromagnetism, often lead to variational principles in this way. As yet no general procedure is available for finding suitable transformations. Existing variational principles for the inviscid fluid equations in the Eulerian description are reviewed and some ideas on the form of the appropriate transformations and Lagrangians for fluid problems are obtained. These ideas are developed in a series of examples which include finding variational principles for Rossby waves and for the internal waves of a stratified fluid.
Abstract:
This thesis presents a novel framework for state estimation in the context of robotic grasping and manipulation. The overall estimation approach is based on fusing various visual cues for manipulator tracking, namely appearance and feature-based, shape-based, and silhouette-based visual cues. Similarly, a framework is developed to fuse the above visual cues, but also kinesthetic cues such as force-torque and tactile measurements, for in-hand object pose estimation. The cues are extracted from multiple sensor modalities and are fused in a variety of Kalman filters.
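At the heart of such cue fusion is the standard Gaussian measurement combination: independent estimates merge with precisions (inverse variances) adding. A minimal one-dimensional sketch follows; the cue names and numbers are hypothetical illustrations, not values from the thesis:

```python
def fuse(m1, v1, m2, v2):
    """Fuse two independent Gaussian estimates (mean, variance) of the
    same scalar state; precisions add, so the fused variance shrinks."""
    v = 1.0 / (1.0 / v1 + 1.0 / v2)   # fused variance
    m = v * (m1 / v1 + m2 / v2)       # precision-weighted mean
    return m, v

# Hypothetical pose estimates from a visual cue and a tactile cue.
m, v = fuse(0.50, 0.04, 0.62, 0.01)   # fused mean 0.596, variance 0.008
```

The fused variance is smaller than either input variance, which is why adding kinesthetic cues to visual ones tightens the object-pose estimate.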
A hybrid estimator is developed to estimate both a continuous state (robot and object states) and discrete states, called contact modes, which specify how each finger contacts a particular object surface. A static multiple model estimator is used to compute and maintain this mode probability. The thesis also develops an estimation framework for estimating model parameters associated with object grasping. Dual and joint state-parameter estimation is explored for parameter estimation of a grasped object's mass and center of mass. Experimental results demonstrate simultaneous object localization and center of mass estimation.
Dual-arm estimation is developed for two-arm robotic manipulation tasks. Two types of filters are explored; the first is an augmented filter that contains both arms in the state vector, while the second runs two filters in parallel, one for each arm. These two frameworks and their performance are compared in a dual-arm task of removing a wheel from a hub.
This thesis also presents a new method for action selection involving touch. This next best touch method selects an available action for interacting with an object that will gain the most information. The algorithm employs information theory to compute an information gain metric that is based on a probabilistic belief suitable for the task. An estimation framework is used to maintain this belief over time. Kinesthetic measurements such as contact and tactile measurements are used to update the state belief after every interactive action. Simulation and experimental results are demonstrated using next best touch for object localization, specifically a door handle on a door. The next best touch theory is extended for model parameter determination. Since many objects within a particular object category share the same rough shape, principal component analysis may be used to parametrize the object mesh models. These parameters can be estimated using the action selection technique, which selects the touching action that best localizes the object and estimates these parameters. Simulation results are then presented involving localizing and determining a parameter of a screwdriver.
Lastly, the next best touch theory is further extended to model classes. Instead of estimating parameters, object class determination is incorporated into the information gain metric calculation. The best touching action is selected in order to best discern between the possible model classes. Simulation results are presented to validate the theory.
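The information-gain selection described above can be sketched on a toy discrete belief. Below, a belief over four candidate locations (e.g. where a door handle might sit) is probed by touch actions with a noisy contact sensor, and the chosen action maximizes the expected entropy reduction. The numbers and the sensor model are illustrative assumptions, not taken from the thesis:

```python
import math

def entropy(p):
    """Shannon entropy (bits) of a discrete distribution."""
    return -sum(q * math.log2(q) for q in p if q > 0)

def posterior(belief, probe, contact, p_hit=0.9, p_false=0.1):
    """Bayes update of the belief after probing one cell.
    P(contact | target in cell i) is p_hit at the probed cell, p_false elsewhere."""
    like = [(p_hit if i == probe else p_false) if contact
            else ((1 - p_hit) if i == probe else (1 - p_false))
            for i in range(len(belief))]
    joint = [l * b for l, b in zip(like, belief)]
    z = sum(joint)                    # predictive probability of this outcome
    return [q / z for q in joint], z

def expected_gain(belief, probe):
    """Expected entropy reduction (mutual information) for probing one cell."""
    gain = entropy(belief)
    for contact in (True, False):
        post, z = posterior(belief, probe, contact)
        gain -= z * entropy(post)
    return gain

belief = [0.7, 0.1, 0.1, 0.1]         # prior over candidate locations
best = max(range(len(belief)), key=lambda a: expected_gain(belief, a))
```

With this prior peaked on cell 0, probing that cell yields the largest expected gain; a flatter prior would shift the choice.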
Abstract:
Galaxies evolve throughout the history of the universe from the first star-forming sources, through gas-rich asymmetric structures with rapid star formation rates, to the massive symmetrical stellar systems observed at the present day. Determining the physical processes which drive galaxy formation and evolution is one of the most important questions in observational astrophysics. This thesis presents four projects aimed at improving our understanding of galaxy evolution from detailed measurements of star forming galaxies at high redshift.
We use resolved spectroscopy of gravitationally lensed z ≃ 2 - 3 star forming galaxies to measure their kinematic and star formation properties. The combination of lensing with adaptive optics yields physical resolution of ≃ 100 pc, sufficient to resolve giant H II regions. We find that ~70% of galaxies in our sample display ordered rotation with high local velocity dispersion indicating turbulent thick disks. The rotating galaxies are gravitationally unstable and are expected to fragment into giant clumps. The size and dynamical mass of giant H II regions are in agreement with predictions for such clumps, indicating that gravitational instability drives the rapid star formation. The remainder of our sample is comprised of ongoing major mergers. Merging galaxies display similar star formation rate, morphology, and local velocity dispersion as isolated sources, but their velocity fields are more chaotic with no coherent rotation.
We measure resolved metallicity in four lensed galaxies at z = 2.0 − 2.4 from optical emission line diagnostics. Three rotating galaxies display radial gradients with higher metallicity at smaller radii, while the fourth is undergoing a merger and has an inverted gradient with lower metallicity at the center. Strong gradients in the rotating galaxies indicate that they are growing inside-out with star formation fueled by accretion of metal-poor gas at large radii. By comparing measured gradients with an appropriate comparison sample at z = 0, we demonstrate that metallicity gradients in isolated galaxies must flatten at later times. The amount of size growth inferred by the gradients is in rough agreement with direct measurements of massive galaxies. We develop a chemical evolution model to interpret these data and conclude that metallicity gradients are established by a gradient in the outflow mass loading factor, combined with radial inflow of metal-enriched gas.
We present the first rest-frame optical spectroscopic survey of a large sample of low-luminosity galaxies at high redshift (L < L*, 1.5 < z < 3.5). This population dominates the star formation density of the universe at high redshifts, yet such galaxies are normally too faint to be studied spectroscopically. We take advantage of strong gravitational lensing magnification to compile observations for a sample of 29 galaxies using modest integration times with the Keck and Palomar telescopes. Balmer emission lines confirm that the sample has a median SFR ∼ 10 M_sun yr^−1 and extends to lower SFR than has been probed by other surveys at similar redshift. We derive the metallicity, dust extinction, SFR, ionization parameter, and dynamical mass from the spectroscopic data, providing the first accurate characterization of the star-forming environment in low-luminosity galaxies at high redshift. For the first time, we directly test the proposal that the relation between galaxy stellar mass, star formation rate, and gas phase metallicity does not evolve. We find lower gas phase metallicity in the high redshift galaxies than in local sources with equivalent stellar mass and star formation rate, arguing against a time-invariant relation. While our result is preliminary and may be biased by measurement errors, this represents an important first measurement that will be further constrained by ongoing analysis of the full data set and by future observations.
We present a study of composite rest-frame ultraviolet spectra of Lyman break galaxies at z = 4 and discuss implications for the distribution of neutral outflowing gas in the circumgalactic medium. In general we find similar spectroscopic trends to those found at z = 3 by earlier surveys. In particular, absorption lines which trace neutral gas are weaker in less evolved galaxies with lower stellar masses, smaller radii, lower luminosity, less dust, and stronger Lyα emission. Typical galaxies are thus expected to have stronger Lyα emission and weaker low-ionization absorption at earlier times, and we indeed find somewhat weaker low-ionization absorption at higher redshifts. In conjunction with earlier results, we argue that the reduced low-ionization absorption is likely caused by lower covering fraction and/or velocity range of outflowing neutral gas at earlier epochs. This result has important implications for the hypothesis that early galaxies were responsible for cosmic reionization. We additionally show that fine structure emission lines are sensitive to the spatial extent of neutral gas, and demonstrate that neutral gas is concentrated at smaller galactocentric radii in higher redshift galaxies.
The results of this thesis present a coherent picture of galaxy evolution at high redshifts 2 ≲ z ≲ 4. Roughly 1/3 of massive star forming galaxies at this period are undergoing major mergers, while the rest are growing inside-out with star formation occurring in gravitationally unstable thick disks. Star formation, stellar mass, and metallicity are limited by outflows which create a circumgalactic medium of metal-enriched material. We conclude by describing some remaining open questions and prospects for improving our understanding of galaxy evolution with future observations of gravitationally lensed galaxies.
Abstract:
The applicability of the white-noise method to the identification of a nonlinear system is investigated. Subsequently, the method is applied to certain vertebrate retinal neuronal systems and nonlinear, dynamic transfer functions are derived which describe quantitatively the information transformations starting with the light-pattern stimulus and culminating in the ganglion response which constitutes the visually-derived input to the brain. The retina of the catfish, Ictalurus punctatus, is used for the experiments.
The Wiener formulation of the white-noise theory is shown to be impractical and difficult to apply to a physical system. A different formulation based on crosscorrelation techniques is shown to be applicable to a wide range of physical systems provided certain considerations are taken into account. These considerations include the time-invariance of the system, an optimum choice of the white-noise input bandwidth, nonlinearities that allow a representation in terms of a small number of characterizing kernels, the memory of the system and the temporal length of the characterizing experiment. Error analysis of the kernel estimates is made, taking into account various sources of error such as noise at the input and output, the bandwidth of the white-noise input and the truncation of the Gaussian by the apparatus.
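The crosscorrelation idea can be illustrated for a first-order kernel on a synthetic discrete-time system: drive a known linear "system" with white noise and recover its kernel as the input-output crosscorrelation divided by the input power. The system and its three-sample memory below are invented for illustration; they stand in for, rather than reproduce, the retinal measurements:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
x = rng.standard_normal(n)            # unit-power discrete white-noise input
h_true = np.array([0.5, 1.0, 0.25])   # invented 3-tap system memory
y = np.convolve(x, h_true)[:n]        # system response to the white noise

# First-order kernel estimate: h1[tau] = E[y(t) x(t - tau)] / P, with P = var(x) = 1.
h_est = np.array([np.dot(y[tau:], x[:n - tau]) / (n - tau)
                  for tau in range(len(h_true))])
```

Higher-order Wiener kernels are recovered in the same spirit from higher-order crosscorrelations, which is the practical advantage over the original Wiener formulation noted in the abstract.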
Nonlinear transfer functions are obtained, as sets of kernels, for several neuronal systems: Light → Receptors, Light → Horizontal, Horizontal → Ganglion, Light → Ganglion and Light → ERG. The derived models can predict, with reasonable accuracy, the system response to any input. Comparison of model and physical system performance showed close agreement for a great number of tests, the most stringent of which is comparison of their responses to a white-noise input. Other tests include step and sine responses and power spectra.
Many functional traits are revealed by these models. Some are: (a) the receptor and horizontal cell systems are nearly linear (small signal) with certain "small" nonlinearities, and become faster (latency-wise and frequency-response-wise) at higher intensity levels, (b) all ganglion systems are nonlinear (half-wave rectification), (c) the receptive field center to ganglion system is slower (latency-wise and frequency-response-wise) than the periphery to ganglion system, (d) the lateral (eccentric) ganglion systems are just as fast (latency and frequency response) as the concentric ones, (e) (bipolar response) = (input from receptors) - (input from horizontal cell), (f) receptive field center and periphery exert an antagonistic influence on the ganglion response, (g) implications about the origin of ERG, and many others.
An analytical solution is obtained for the spatial distribution of potential in the S-space, which fits very well experimental data. Different synaptic mechanisms of excitation for the external and internal horizontal cells are implied.
Abstract:
This thesis presents a new class of solvers for the subsonic compressible Navier-Stokes equations in general two- and three-dimensional spatial domains. The proposed methodology incorporates: 1) A novel linear-cost implicit solver based on use of higher-order backward differentiation formulae (BDF) and the alternating direction implicit approach (ADI); 2) A fast explicit solver; 3) Dispersionless spectral spatial discretizations; and 4) A domain decomposition strategy that negotiates the interactions between the implicit and explicit domains. In particular, the implicit methodology is quasi-unconditionally stable (it does not suffer from CFL constraints for adequately resolved flows), and it can deliver orders of time accuracy between two and six in the presence of general boundary conditions. In fact this thesis presents, for the first time in the literature, high-order time-convergence curves for Navier-Stokes solvers based on the ADI strategy---previous ADI solvers for the Navier-Stokes equations have not demonstrated orders of temporal accuracy higher than one. An extended discussion is presented in this thesis which places on a solid theoretical basis the observed quasi-unconditional stability of the methods of orders two through six. The performance of the proposed solvers is favorable. For example, a two-dimensional rough-surface configuration including boundary layer effects at Reynolds number equal to one million and Mach number 0.85 (with a well-resolved boundary layer, run up to a sufficiently long time that single vortices travel the entire spatial extent of the domain, and with spatial mesh sizes near the wall of the order of one hundred-thousandth the length of the domain) was successfully tackled in a relatively short (approximately thirty-hour) single-core run; for such discretizations an explicit solver would require truly prohibitive computing times. 
As demonstrated via a variety of numerical experiments in two and three dimensions, the proposed multi-domain parallel implicit-explicit implementations further exhibit high-order convergence in space and time, useful stability properties, limited dispersion, and high parallel efficiency.
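The claimed temporal orders can be checked on a scalar model problem. The sketch below applies BDF2 to y' = λy (the implicit step solves in closed form because the problem is linear) and estimates the convergence order from two step sizes; this is a generic verification exercise, not the thesis solver:

```python
import math

def bdf2_decay(lam, T, nsteps):
    """BDF2 on y' = lam*y, y(0) = 1. The implicit relation
    (3/2)y_{n+2} - 2y_{n+1} + (1/2)y_n = h*lam*y_{n+2}
    is solved in closed form; the single start-up value is taken exact."""
    h = T / nsteps
    y0, y1 = 1.0, math.exp(lam * h)
    for _ in range(nsteps - 1):
        y2 = ((4.0/3.0) * y1 - (1.0/3.0) * y0) / (1.0 - (2.0/3.0) * h * lam)
        y0, y1 = y1, y2
    return y1

lam, T = -2.0, 1.0
exact = math.exp(lam * T)
e_coarse = abs(bdf2_decay(lam, T, 100) - exact)
e_fine = abs(bdf2_decay(lam, T, 200) - exact)
order = math.log2(e_coarse / e_fine)   # should land near 2
```

The same refinement study, with the start-up values of the higher-order formulae handled carefully, exhibits the orders up to six cited above.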
Abstract:
Understanding friction and adhesion in static and sliding contact of surfaces is important in numerous physical phenomena and technological applications. Most surfaces are rough at the microscale, and thus the real area of contact is only a fraction of the nominal area. The macroscopic frictional and adhesive response is determined by the collective behavior of the population of evolving and interacting microscopic contacts. This collective behavior can be very different from the behavior of individual contacts. It is thus important to understand how the macroscopic response emerges from the microscopic one. In this thesis, we develop a theoretical and computational framework to study the collective behavior. Our philosophy is to assume a simple behavior of a single asperity and study the collective response of an ensemble. Our work bridges the existing well-developed studies of single asperities with phenomenological laws that describe macroscopic rate-and-state behavior of frictional interfaces. We find that many aspects of the macroscopic behavior are robust with respect to the microscopic response. This explains why qualitatively similar frictional features are seen for a diverse range of materials. We first show that the collective response of an ensemble of one-dimensional independent viscoelastic elements interacting through a mean field reproduces many qualitative features of static and sliding friction evolution. The resulting macroscopic behavior is different from the microscopic one: for example, even if each contact is velocity-strengthening, the macroscopic behavior can be velocity-weakening. The framework is then extended to incorporate three-dimensional rough surfaces, long-range elastic interactions between contacts, and time-dependent material behaviors such as viscoelasticity and viscoplasticity.
Interestingly, the mean field behavior dominates and the elastic interactions, though important from a quantitative perspective, do not change the qualitative macroscopic response. Finally, we examine the effect of adhesion on the frictional response as well as develop a force threshold model for adhesion and mode I interfacial cracks.
Abstract:
A general review of stochastic processes is given in the introduction; definitions, properties and a rough classification are presented together with the position and scope of the author's work as it fits into the general scheme.
The first section presents a brief summary of the pertinent analytical properties of continuous stochastic processes and their probability-theoretic foundations which are used in the sequel.
The remaining two sections (II and III), comprising the body of the work, are the author's contribution to the theory. It turns out that a very inclusive class of continuous stochastic processes are characterized by a fundamental partial differential equation and its adjoint (the Fokker-Planck equations). The coefficients appearing in those equations assimilate, in a most concise way, all the salient properties of the process, freed from boundary value considerations. The writer’s work consists in characterizing the processes through these coefficients without recourse to solving the partial differential equations.
First, a class of coefficients leading to a unique, continuous process is presented, and several facts are proven to show why this class is restricted. Then, in terms of the coefficients, the unconditional statistics are deduced, these being the mean, variance and covariance. The most general class of coefficients leading to the Gaussian distribution is deduced, and a complete characterization of these processes is presented. By specializing the coefficients, all the known stochastic processes may be readily studied, and some examples of these are presented, viz. the Einstein process, Bachelier process, Ornstein-Uhlenbeck process, etc. The calculations are effectively reduced to ordinary first-order differential equations, and in addition to giving a comprehensive characterization, the derivations are materially simplified over the solution of the original partial differential equations.
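As a concrete instance of how the coefficients reduce to ordinary first-order differential equations, consider the forward Fokker-Planck equation with the Ornstein-Uhlenbeck coefficients (the notation here is generic, not the thesis's):

```latex
\frac{\partial p}{\partial t}
  = -\frac{\partial}{\partial x}\bigl[a(x)\,p\bigr]
  + \frac{1}{2}\,\frac{\partial^2}{\partial x^2}\bigl[b(x)\,p\bigr],
\qquad a(x) = -\beta x,\quad b(x) = D.

% The first two moments then satisfy ordinary first-order equations:
\frac{dm}{dt} = -\beta m,
\qquad
\frac{d\sigma^2}{dt} = -2\beta\,\sigma^2 + D
\;\Longrightarrow\;
\sigma^2 \to \frac{D}{2\beta}\ \text{as}\ t \to \infty.
```

The mean and variance follow without ever solving the partial differential equation itself, which is the simplification claimed above.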
In the last section the properties of the integral process are presented. After an expository section on the definition, meaning, and importance of the integral process, a particular example is carried through starting from basic definition. This illustrates the fundamental properties, and an inherent paradox. Next the basic coefficients of the integral process are studied in terms of the original coefficients, and the integral process is uniquely characterized. It is shown that the integral process, with a slight modification, is a continuous Markoff process.
The elementary statistics of the integral process are deduced: means, variances, and covariances, in terms of the original coefficients. It is shown that the integral process of a non-degenerate process is never temporally homogeneous.
Finally, in terms of the original class of admissible coefficients, the statistics of the integral process are explicitly presented, and the integral process of all known continuous processes are specified.
Abstract:
In partial fulfillment of the requirements for a Professional Degree in Geophysical Engineering at the California Institute of Technology, the Spontaneous Polarization method of electrical exploration was chosen as the subject of this thesis. It is also known as "self-potential electrical prospecting" and the "natural currents method."
The object of this thesis is to present a spontaneous polarization exploration work done by the writer, and to apply analytical interpretation methods to these field results.
The writer was confronted with the difficulty of finding the necessary information in any single complete paper about this method; the available papers are all too short, repeat the usual information, and give the same examples. The decision was made to write a comprehensive paper first, including the writer's experience, and then to present the main object of the thesis.
The following paper comprises three major parts:
1 - A comprehensive treatment of the spontaneous polarization method.
2 - Report of the field work.
3 - Analytical interpretation of the field work results.
The main reason for choosing this subject is that this method is the most reliable and easiest, and requires the least equipment, in prospecting for sulphide orebodies on unexplored, rough terrain.
The intention of the writer in compiling the theoretical and analytical information has been mainly to prepare a reference paper about this method.
The writer wishes to express his appreciation to Dr. G. W. Potapenko, Associate Professor of Physics at California Institute of Technology, for his generous help.
Abstract:
The Earth's largest geoid anomalies occur at the lowest spherical harmonic degrees, or longest wavelengths, and are primarily the result of mantle convection. Thermal density contrasts due to convection are partially compensated by boundary deformations due to viscous flow whose effects must be included in order to obtain a dynamically consistent model for the geoid. These deformations occur rapidly with respect to the timescale for convection, and we have analytically calculated geoid response kernels for steady-state, viscous, incompressible, self-gravitating, layered Earth models which include the deformation of boundaries due to internal loads. Both the sign and magnitude of geoid anomalies depend strongly upon the viscosity structure of the mantle as well as the possible presence of chemical layering.
Correlations of various global geophysical data sets with the observed geoid can be used to construct theoretical geoid models which constrain the dynamics of mantle convection. Surface features such as topography and plate velocities are not obviously related to the low-degree geoid, with the exception of subduction zones which are characterized by geoid highs (degrees 4-9). Recent models for seismic heterogeneity in the mantle provide additional constraints, and much of the low-degree (2-3) geoid can be attributed to seismically inferred density anomalies in the lower mantle. The Earth's largest geoid highs are underlain by low density material in the lower mantle, thus requiring compensating deformations of the Earth's surface. A dynamical model for whole mantle convection with a low viscosity upper mantle can explain these observations and successfully predicts more than 80% of the observed geoid variance.
Temperature variations associated with density anomalies in the mantle cause lateral viscosity variations whose effects are not included in the analytical models. However, perturbation theory and numerical tests show that broad-scale lateral viscosity variations are much less important than radial variations; in this respect, geoid models, which depend upon steady-state surface deformations, may provide more reliable constraints on mantle structure than inferences from transient phenomena such as postglacial rebound. Stronger, smaller-scale viscosity variations associated with mantle plumes and subducting slabs may be more important. On the basis of numerical modelling of low viscosity plumes, we conclude that the global association of geoid highs (after slab effects are removed) with hotspots and, perhaps, mantle plumes, is the result of hot, upwelling material in the lower mantle; this conclusion does not depend strongly upon plume rheology. The global distribution of hotspots and the dominant, low-degree geoid highs may correspond to a dominant mode of convection stabilized by the ancient Pangean continental assemblage.