49 results for pH condition

in CaltechTHESIS


Relevance: 20.00%

Abstract:

This dissertation is concerned with the problem of determining the dynamic characteristics of complicated engineering systems and structures from the measurements made during dynamic tests or natural excitations. Particular attention is given to the identification and modeling of the behavior of structural dynamic systems in the nonlinear hysteretic response regime. Once a model for the system has been identified, it is intended to use this model to assess the condition of the system and to predict the response to future excitations.

A new identification methodology based upon a generalization of the method of modal identification for multi-degree-of-freedom dynamical systems subjected to base motion is developed. The situation considered herein is that in which only the base input and the response of a small number of degrees-of-freedom of the system are measured. In this method, called the generalized modal identification method, the response is separated into "modes" which are analogous to those of a linear system. Both parametric and nonparametric models can be employed to extract the unknown nature, hysteretic or nonhysteretic, of the generalized restoring force for each mode.

In this study, a simple four-term nonparametric model is used first to provide a nonhysteretic estimate of the nonlinear stiffness and energy dissipation behavior. To extract the hysteretic nature of nonlinear systems, a two-parameter distributed element model is then employed. This model exploits the results of the nonparametric identification as an initial estimate for the model parameters. This approach greatly improves the convergence of the subsequent optimization process.
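The abstract does not spell out which four terms the nonparametric model uses, so the sketch below is purely illustrative: it assumes a low-order polynomial basis in displacement and velocity and fits it to synthetic single-mode data by least squares. The basis, variable names, and data are hypothetical, not the thesis's.

```python
import numpy as np

# Hypothetical single-"mode" records of displacement x, velocity v, and restoring force r;
# in practice r would be deduced from the measured base input and response accelerations.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 2000)
x = np.sin(2 * np.pi * 1.2 * t) * np.exp(-0.05 * t)
v = np.gradient(x, t)
r_true = 4.0 * x + 0.8 * x**3 + 0.3 * v + 0.05 * v * np.abs(v)   # assumed "true" behavior
r_meas = r_true + 0.01 * rng.standard_normal(t.size)             # add measurement noise

# Four-term nonhysteretic model:  r ~ a1*x + a2*x^3 + a3*v + a4*v|v|  (an assumed basis).
A = np.column_stack([x, x**3, v, v * np.abs(v)])
coeffs, *_ = np.linalg.lstsq(A, r_meas, rcond=None)
print("identified stiffness/damping coefficients:", np.round(coeffs, 3))
```

In the approach described above, coefficients of this kind would then serve only as the initial estimate handed to the distributed element model, which is the step that actually captures the hysteresis.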

The capability of the new method is verified using simulated response data from a three-degree-of-freedom system. The new method is also applied to the analysis of response data obtained from the U.S.-Japan cooperative pseudo-dynamic test of a full-scale six-story steel-frame structure.

The new system identification method described has been found to be both accurate and computationally efficient. It is believed that it will provide a useful tool for the analysis of structural response data.

Relevance: 20.00%

Abstract:

Fluid diffusion in glassy polymers proceeds in ways that are not explained by the standard diffusion model. Although the reasons for the anomalous effects are not known, much of the observed behavior is attributed to the long times that polymers below their glass transition temperature take to adjust to changes in their condition. The slow internal relaxations of the polymer chains ensure that the material properties are history-dependent, and also allow both local inhomogeneities and differential swelling to occur. Two models are developed in this thesis with the intent of accounting for these effects in the diffusion process.

In Part I, a model is developed to account for both the history dependence of the glassy polymer, and the dual sorption which occurs when gas molecules are immobilized by the local heterogeneities. A preliminary study of a special case of this model is conducted, showing the existence of travelling wave solutions and using perturbation techniques to investigate the effect of generalized diffusion mechanisms on their form. An integral averaging method is used to estimate the penetrant front position.

In Part II, a model is developed for particle diffusion along with displacements in isotropic viscoelastic materials. The nonlinear dependence of the materials on the fluid concentration is taken into account, while pure displacements are assumed to remain in the range of linear viscoelasticity. A fairly general model is obtained for three-dimensional irrotational movements, with the development of the model being based on the assumptions of irreversible thermodynamics. With the help of some dimensional analysis, this model is simplified to a version which is proposed to be studied for Case II behavior.

Relevance: 20.00%

Abstract:

The question of finding variational principles for coupled systems of first order partial differential equations is considered. Using a potential representation for solutions of the first order system, a higher order system is obtained. Existence of a variational principle follows if the original system can be transformed to a self-adjoint higher order system. Existence of variational principles for all linear wave equations with constant coefficients having real dispersion relations is established. Adjoining some of the equations of the original system to a suitable Lagrangian function by the method of Lagrange multipliers is used to construct new variational principles for a class of linear systems. The equations used as side conditions must satisfy highly restrictive integrability conditions. In the more difficult nonlinear case the system of two equations in two independent variables can be analyzed completely. For systems determined by two conservation laws the side condition must be a conservation law in addition to satisfying the integrability conditions.
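A minimal constant-coefficient illustration of the potential-representation idea (the notation is ours, not the thesis's):

```latex
\[
u_t + v_x = 0, \qquad v_t + u_x = 0 .
\]
With the potential representation $u=\psi_t$, $v=-\psi_x$, the second equation holds
identically, and the first becomes the self-adjoint higher-order equation
\[
\psi_{tt}-\psi_{xx}=0 ,
\]
which therefore follows from the variational principle
\[
\delta \int\!\!\int \tfrac{1}{2}\bigl(\psi_t^{2}-\psi_x^{2}\bigr)\,dx\,dt = 0 .
\]
```

This toy case also has the real dispersion relation $\omega^2 = k^2$, consistent with the class of linear wave equations for which existence is established above.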

Relevance: 20.00%

Abstract:

The problem of the slow viscous flow of a gas past a sphere is considered. The fluid cannot be treated as incompressible in the limit when the Reynolds number Re and the Mach number M tend to zero in such a way that Re ~ o(M^2). In this case, the lowest order approximation to the steady Navier-Stokes equations of motion leads to a paradox discovered by Lagerstrom and Chester. This paradox is resolved within the framework of continuum mechanics using the classical slip condition and an iteration scheme that takes into account certain terms in the full Navier-Stokes equations that drop out in the approximation used by the above authors. It is found however that the drag predicted by the theory does not agree with R. A. Millikan's classic experiments on sphere drag.

The whole question of the applicability of the Navier-Stokes theory when the Knudsen number M/Re is not small is examined. A new slip condition is proposed. The idea that the Navier-Stokes equations coupled with this condition may adequately describe small Reynolds number flows when the Knudsen number is not too large is examined in some detail. First, a general discussion of asymptotic solutions of the equations for all such flows is given. The theory is then applied to several concrete problems of fluid motion. The deductions from this theory appear to interpret and summarize the results of Millikan over a much wider range of Knudsen numbers (almost up to the free molecular or kinetic limit) than hitherto believed possible by a purely continuum theory. Further experimental tests are suggested and certain interesting applications to the theory of dilute suspensions in gases are noted. Some of the questions raised in the main body of the work are explored further in the appendices.

Relevance: 20.00%

Abstract:

Consider a sphere immersed in a rarefied monatomic gas with zero mean flow. The distribution function of the molecules at infinity is chosen to be a Maxwellian. The boundary condition at the body is diffuse reflection with perfect accommodation to the surface temperature. The microscopic flow of particles about the sphere is modeled kinetically by the Boltzmann equation with the Krook collision term. Appropriate normalizations in the near and far fields lead to a perturbation solution of the problem, expanded in terms of the ratio of body diameter to mean free path (inverse Knudsen number). The distribution function is found directly in each region, and intermediate matching is demonstrated. The heat transfer from the sphere is then calculated as an integral over this distribution function in the inner region. Final results indicate that the heat transfer may at first increase over its free flow value before falling to the continuum level.

Relevance: 20.00%

Abstract:

Part I.

We have developed a technique for measuring the depth-time history of rigid body penetration into brittle materials (hard rocks and concretes) under a deceleration of ~10^5 g. The technique includes bar-coded projectile, sabot-projectile separation, detection and recording systems. Because the technique can give very dense data on the penetration depth-time history, the penetration velocity can be deduced. Error analysis shows that the technique has a small intrinsic error of ~3-4% in time during penetration, and 0.3 to 0.7 mm in penetration depth. A series of 4140 steel projectile penetration experiments into G-mixture mortar targets has been conducted using the Caltech 40 mm gas/powder gun in the velocity range of 100 to 500 m/s.

We report, for the first time, the whole depth-time history of rigid body penetration into brittle materials (the G-mixture mortar) under ~10^5 g deceleration. Based on the experimental results, including the penetration depth-time history, damage to recovered target and projectile materials, and theoretical analysis, we find:

1. Target materials are damaged via compacting in the region in front of a projectile and via brittle radial and lateral crack propagation in the region surrounding the penetration path. The results suggest that expected cracks in front of penetrators may be stopped by a comminuted region that is induced by wave propagation. Aggregate erosion on the projectile lateral surface is < 20% of the final penetration depth. This result suggests that the effect of lateral friction on the penetration process can be ignored.

2. Final penetration depth, Pmax, is linearly scaled with initial projectile energy per unit cross-section area, es, when targets are intact after impact. Based on the experimental data on the mortar targets, the relation is Pmax(mm) = 1.15es (J/mm^2) + 16.39 (this and the other fitted relations below are collected in a short numerical sketch after this list).

3. Estimation of the energy needed to create a unit penetration volume suggests that the average pressure acting on the target material during penetration is ~10 to 20 times higher than the unconfined strength of the target materials under quasi-static loading, and 3 to 4 times higher than the highest possible pressure due to friction and material strength and its rate dependence. In addition, the experimental data show that the interaction between cracks and the target free surface significantly affects the penetration process.

4. Based on the fact that the penetration duration, tmax, increases slowly with es and is approximately independent of projectile radius, the dependence of tmax on projectile length is suggested to be described by tmax(μs) = 2.08es (J/mm^2) + 349.0 × m/(πR^2), in which m is the projectile mass in grams and R is the projectile radius in mm. The prediction from this relation is in reasonable agreement with the experimental data for different projectile lengths.

5. Deduced penetration velocity time histories suggest that whole penetration history is divided into three stages: (1) An initial stage in which the projectile velocity change is small due to very small contact area between the projectile and target materials; (2) A steady penetration stage in which projectile velocity continues to decrease smoothly; (3) A penetration stop stage in which projectile deceleration jumps up when velocities are close to a critical value of ~ 35 m/s.

6. The deduced average deceleration, a, in the steady penetration stage for projectiles with the same dimensions is found to be a(g) = 192.4v + 1.89 × 10^4, where v is the initial projectile velocity in m/s. The average pressure acting on the target materials during penetration is estimated to be very comparable to the shock wave pressure.

7. A similarity of the penetration process is found, described by a relation between normalized penetration depth, P/Pmax, and normalized penetration time, t/tmax, as P/Pmax = f(t/tmax), where f is a function of t/tmax. After f(t/tmax) is determined using experimental data for projectiles with 150 mm length, the penetration depth-time history for projectiles with 100 mm length predicted by this relation is in good agreement with the experimental data. This similarity also predicts that the average deceleration increases with decreasing projectile length, which is verified by the experimental data.

8. Based on the penetration process analysis and the present data, a first-principles model for rigid body penetration is suggested. The model incorporates sub-models for the contact area between projectile and target materials, the friction coefficient, the penetration stop criterion, and the normal stress on the projectile surface. The most important assumptions used in the model are: (1) the penetration process can be treated as a series of impact events, so the pressure normal to the projectile surface is estimated using the Hugoniot relation of the target material; (2) the necessary condition for penetration is that the pressure acting on the target materials is not lower than the Hugoniot elastic limit; (3) the friction force on the projectile lateral surface can be ignored due to cavitation during penetration. All the parameters involved in the model are determined from independent experimental data. The penetration depth-time histories predicted by the model are in good agreement with the experimental data.

9. Based on planar impact and previous quasi-static experimental data, the strain rate dependence of the mortar compressive strength is described by σf/σ0f = exp(0.0905(log(ε̇/ε̇_0))^1.14), over the strain rate range of 10^-7/s to 10^3/s (σ0f and ε̇_0 are the reference compressive strength and strain rate, respectively). The non-dispersive Hugoniot elastic wave in the G-mixture has an amplitude of ~0.14 GPa and a velocity of ~4.3 km/s.
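For convenience, the fitted relations quoted in items 2, 4 and 6 above can be evaluated together. The sketch below simply codes those empirical formulas (units as stated in the text); the projectile mass, radius, and impact velocity are made-up inputs chosen only to be of the order used in the reported 100-500 m/s experiments, and the code itself is not part of the thesis.

```python
import numpy as np

def penetration_depth_mm(es):
    """Item 2: Pmax(mm) = 1.15*es + 16.39, with es in J/mm^2 (intact mortar targets)."""
    return 1.15 * es + 16.39

def penetration_duration_us(es, m_g, R_mm):
    """Item 4: tmax(us) = 2.08*es + 349.0*m/(pi*R^2), m in grams, R in mm."""
    return 2.08 * es + 349.0 * m_g / (np.pi * R_mm**2)

def steady_deceleration_g(v_mps):
    """Item 6: a(g) = 192.4*v + 1.89e4, v = initial projectile velocity in m/s."""
    return 192.4 * v_mps + 1.89e4

# Hypothetical projectile (illustrative numbers only).
m_g, R_mm, v = 400.0, 10.0, 300.0
es = 0.5 * (m_g / 1000.0) * v**2 / (np.pi * R_mm**2)   # kinetic energy per unit cross-section, J/mm^2
print(f"es ~ {es:.1f} J/mm^2")
print(f"Pmax ~ {penetration_depth_mm(es):.1f} mm")
print(f"tmax ~ {penetration_duration_us(es, m_g, R_mm):.0f} us")
print(f"steady-stage deceleration ~ {steady_deceleration_g(v):.2e} g")
```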

Part II.

Stress wave profiles in vitreous GeO2 were measured using piezoresistance gauges in the pressure range of 5 to 18 GPa under planar plate and spherical projectile impact. Experimental data show that the response of vitreous GeO2 to planar shock loading can be divided into three stages: (1) A ramp elastic precursor has peak amplitude of 4 GPa and peak particle velocity of 333 m/s. Wave velocity decreases from the initial longitudinal elastic wave velocity of 3.5 km/s to 2.9 km/s at 4 GPa; (2) A ramp wave with amplitude of 2.11 GPa follows the precursor when the peak loading pressure is 8.4 GPa. Wave velocity drops below the bulk wave velocity in this stage; (3) A shock wave achieving the final shock state forms when the peak pressure is > 6 GPa. The Hugoniot relation is D = 0.917 + 1.711u (km/s), using the present data and the data of Jackson and Ahrens [1979], when the shock wave pressure is between 6 and 40 GPa for ρ0 = 3.655 g/cm^3. Based on the present data, the phase change from 4-fold to 6-fold coordination of Ge^4+ with O^2- in vitreous GeO2 occurs in the pressure range of 4 to 15 ± 1 GPa under planar shock loading. Comparison of the shock loading data for fused SiO2 with that for vitreous GeO2 demonstrates that the transformation to the rutile structure in the two media is similar. The Hugoniots of vitreous GeO2 and fused SiO2 are found to coincide approximately if the pressure in fused SiO2 is scaled by the ratio of fused SiO2 to vitreous GeO2 density. This result, as well as the similarity in structure, provides the basis for considering vitreous GeO2 as an analogue material to fused SiO2 under shock loading. Experimental results from the spherical projectile impact demonstrate: (1) The supported elastic shock in fused SiO2 decays less rapidly than a linear elastic wave when the elastic wave stress amplitude is higher than 4 GPa. The supported elastic shock in vitreous GeO2 decays faster than a linear elastic wave; (2) In vitreous GeO2, unsupported shock waves with peak pressure in the phase transition range (4-15 GPa) decay with propagation distance, x, as ∝ x^-3.35, close to the prediction of Chen et al. [1998]. Based on a simple analysis of spherical wave propagation, we find that the different decay rates of a spherical elastic wave in fused SiO2 and vitreous GeO2 are predictable on the basis of the compressibility variation with stress under one-dimensional strain conditions in the two materials.
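To connect the quoted Hugoniot relation to pressure, one can combine D = 0.917 + 1.711u and ρ0 = 3.655 g/cm^3 from the text with the standard momentum jump condition P = ρ0·D·u for a single steady shock. The short check below is ours, not the thesis's, and the chosen particle velocities are illustrative.

```python
# Shock pressure along the reported vitreous GeO2 Hugoniot, using the standard
# Rankine-Hugoniot momentum relation P = rho0 * D * u (single steady shock).
rho0 = 3655.0  # kg/m^3  (3.655 g/cm^3, as quoted)

def shock_velocity_kmps(u_kmps):
    return 0.917 + 1.711 * u_kmps          # D(u) in km/s, quoted for the 6-40 GPa range

def pressure_gpa(u_kmps):
    D = shock_velocity_kmps(u_kmps) * 1e3  # m/s
    u = u_kmps * 1e3                       # m/s
    return rho0 * D * u / 1e9              # Pa -> GPa

for u in (0.8, 1.5, 2.2):                  # particle velocities in km/s (illustrative values)
    print(f"u = {u:.1f} km/s  ->  D = {shock_velocity_kmps(u):.2f} km/s,  P ~ {pressure_gpa(u):.1f} GPa")
```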

Relevance: 20.00%

Abstract:

Demixing is the task of identifying multiple signals given only their sum and prior information about their structures. Examples of demixing problems include (i) separating a signal that is sparse with respect to one basis from a signal that is sparse with respect to a second basis; (ii) decomposing an observed matrix into low-rank and sparse components; and (iii) identifying a binary codeword with impulsive corruptions. This thesis describes and analyzes a convex optimization framework for solving an array of demixing problems.

Our framework includes a random orientation model for the constituent signals that ensures the structures are incoherent. This work introduces a summary parameter, the statistical dimension, that reflects the intrinsic complexity of a signal. The main result indicates that the difficulty of demixing under this random model depends only on the total complexity of the constituent signals involved: demixing succeeds with high probability when the sum of the complexities is less than the ambient dimension; otherwise, it fails with high probability.
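A minimal sketch of demixing example (i) under a random-rotation incoherence model of the kind described above: one signal is sparse in the standard basis, the other is sparse in a randomly oriented basis, and both are recovered from their sum by convex (l1) optimization. The dimensions, sparsity level, and use of the cvxpy modeling package are our own choices, not the thesis's.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n, k = 128, 5
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))   # random orthogonal basis (random orientation)

x0 = np.zeros(n); x0[rng.choice(n, k, replace=False)] = rng.standard_normal(k)   # sparse in identity basis
y0 = np.zeros(n); y0[rng.choice(n, k, replace=False)] = rng.standard_normal(k)   # sparse in the rotated basis
z = x0 + Q @ y0                                    # only the superposition is observed

# Convex demixing: minimize the sum of sparsity-promoting l1 norms subject to consistency.
x, y = cp.Variable(n), cp.Variable(n)
prob = cp.Problem(cp.Minimize(cp.norm1(x) + cp.norm1(y)), [x + Q @ y == z])
prob.solve()

print("recovery errors:", np.linalg.norm(x.value - x0), np.linalg.norm(y.value - y0))
# Success is expected here because both constituents are simple enough that the sum of
# their complexities (statistical dimensions) falls below the ambient dimension n.
```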

The fact that a phase transition between success and failure occurs in demixing is a consequence of a new inequality in conic integral geometry. Roughly speaking, this inequality asserts that a convex cone behaves like a subspace whose dimension is equal to the statistical dimension of the cone. When combined with a geometric optimality condition for demixing, this inequality provides precise quantitative information about the phase transition, including the location and width of the transition region.

Relevance: 20.00%

Abstract:

The dissertation is concerned with the mathematical study of various network problems. First, three real-world networks are considered: (i) the human brain network, (ii) communication networks, and (iii) electric power networks. Although these networks perform very different tasks, they share similar mathematical foundations. The high-level goal is to analyze and/or synthesize each of these systems from a “control and optimization” point of view. After studying these three real-world networks, two abstract network problems are also explored, which are motivated by power systems. The first one is “flow optimization over a flow network” and the second one is “nonlinear optimization over a generalized weighted graph”. The results derived in this dissertation are summarized below.

Brain Networks: Neuroimaging data reveals the coordinated activity of spatially distinct brain regions, which may be represented mathematically as a network of nodes (brain regions) and links (interdependencies). To obtain the brain connectivity network, the graphs associated with the correlation matrix and the inverse covariance matrix—describing marginal and conditional dependencies between brain regions—have been proposed in the literature. A question arises as to whether any of these graphs provides useful information about brain connectivity. Due to the electrical properties of the brain, this problem is investigated in the context of electrical circuits. First, we consider an electric circuit model and show that the inverse covariance matrix of the node voltages reveals the topology of the circuit. Second, we study the problem of finding the topology of the circuit based only on measurements. In this case, by assuming that the circuit is hidden inside a black box and only the nodal signals are available for measurement, the aim is to find the topology of the circuit when a limited number of samples are available. For this purpose, we deploy the graphical lasso technique to estimate a sparse inverse covariance matrix. It is shown that the graphical lasso may find most of the circuit topology if the exact covariance matrix is well-conditioned. However, it may fail to work well when this matrix is ill-conditioned. To deal with ill-conditioned matrices, we propose a small modification to the graphical lasso algorithm and demonstrate its performance. Finally, the technique developed in this work is applied to the resting-state fMRI data of a number of healthy subjects.
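A minimal sketch of the sparse inverse covariance step in this black-box setting, using scikit-learn's GraphicalLasso as a stand-in estimator; the chain-shaped "circuit", sample size, and regularization value are made up, and the conditioning fix proposed in the thesis is not reproduced.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)

# Hypothetical ground truth: a sparse precision (inverse covariance) matrix whose nonzero
# off-diagonal entries play the role of branches between the observed circuit nodes.
n = 8
theta = np.eye(n)
for i, j in [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 6), (6, 7)]:
    theta[i, j] = theta[j, i] = -0.4
cov = np.linalg.inv(theta)

# A limited number of nodal samples, as in the black-box setting described above.
X = rng.multivariate_normal(np.zeros(n), cov, size=200)

model = GraphicalLasso(alpha=0.05).fit(X)
estimated_edges = (np.abs(model.precision_) > 1e-2) & ~np.eye(n, dtype=bool)
print(np.argwhere(np.triu(estimated_edges)))   # estimated edge list; compare with the chain above
```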

Communication Networks: Congestion control techniques aim to adjust the transmission rates of competing users in the Internet in such a way that the network resources are shared efficiently. Despite the progress in the analysis and synthesis of Internet congestion control, almost all existing fluid models of congestion control assume that every link in the path of a flow observes the original source rate. To address this issue, a more accurate model is derived in this work for the behavior of the network under an arbitrary congestion controller, which takes into account the effect of buffering (queueing) on data flows. Using this model, it is proved that the well-known Internet congestion control algorithms may no longer be stable for the common pricing schemes, unless a sufficient condition is satisfied. It is also shown that these algorithms are guaranteed to be stable if a new pricing mechanism is used.

Electrical Power Networks: Optimal power flow (OPF) has been one of the most studied problems for power systems since its introduction by Carpentier in 1962. This problem is concerned with finding an optimal operating point of a power network minimizing the total power generation cost subject to network and physical constraints. It is well known that OPF is computationally hard to solve due to the nonlinear interrelation among the optimization variables. The objective is to identify a large class of networks over which every OPF problem can be solved in polynomial time. To this end, a convex relaxation is proposed, which solves the OPF problem exactly for every radial network and every meshed network with a sufficient number of phase shifters, provided power over-delivery is allowed. The concept of “power over-delivery” is equivalent to relaxing the power balance equations to inequality constraints.

Flow Networks: In this part of the dissertation, the minimum-cost flow problem over an arbitrary flow network is considered. In this problem, each node is associated with some possibly unknown injection, each line has two unknown flows at its ends related to each other via a nonlinear function, and all injections and flows need to satisfy certain box constraints. This problem, named generalized network flow (GNF), is highly non-convex due to its nonlinear equality constraints. Under the assumption of monotonicity and convexity of the flow and cost functions, a convex relaxation is proposed, which always finds the optimal injections. A primary application of this work is in the OPF problem. The results of this work on GNF prove that the relaxation on power balance equations (i.e., load over-delivery) is not needed in practice under a very mild angle assumption.

Generalized Weighted Graphs: Motivated by power optimizations, this part aims to find a global optimization technique for a nonlinear optimization defined over a generalized weighted graph. Every edge of this type of graph is associated with a weight set corresponding to the known parameters of the optimization (e.g., the coefficients). The motivation behind this problem is to investigate how the (hidden) structure of a given real/complex valued optimization makes the problem easy to solve, and indeed the generalized weighted graph is introduced to capture the structure of an optimization. Various sufficient conditions are derived, which relate the polynomial-time solvability of different classes of optimization problems to weak properties of the generalized weighted graph such as its topology and the sign definiteness of its weight sets. As an application, it is proved that a broad class of real and complex optimizations over power networks are polynomial-time solvable due to the passivity of transmission lines and transformers.

Relevance: 20.00%

Abstract:

Life is the result of the execution of molecular programs: like how an embryo is fated to become a human or a whale, or how a person’s appearance is inherited from their parents, many biological phenomena are governed by genetic programs written in DNA molecules. At the core of such programs is the highly reliable base pairing interaction between nucleic acids. DNA nanotechnology exploits the programming power of DNA to build artificial nanostructures, molecular computers, and nanomachines. In particular, DNA origami—which is a simple yet versatile technique that allows one to create various nanoscale shapes and patterns—is at the heart of the technology. In this thesis, I describe the development of programmable self-assembly and reconfiguration of DNA origami nanostructures based on a unique strategy: rather than relying on Watson-Crick base pairing, we developed programmable bonds via the geometric arrangement of stacking interactions, which we termed stacking bonds. We further demonstrated that such bonds can be dynamically reconfigurable.

The first part of this thesis describes the design and implementation of stacking bonds. Our work addresses the fundamental question of whether one can create diverse bond types out of a single kind of attractive interaction—a question first posed implicitly by Francis Crick while seeking a deeper understanding of the origin of life and primitive genetic code. For the creation of multiple specific bonds, we used two different approaches: binary coding and shape coding of geometric arrangement of stacking interaction units, which are called blunt ends. To construct a bond space for each approach, we performed a systematic search using a computer algorithm. We used orthogonal bonds to experimentally implement the connection of five distinct DNA origami nanostructures. We also programmed the bonds to control cis/trans configuration between asymmetric nanostructures.
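A toy sketch of the binary-coding idea only: each bond edge is represented as a bit pattern of active blunt-end positions, two patterns are scored by how many active positions line up, and a small search collects codes that are mutually "orthogonal" under that score. The code length, scoring rule, and threshold here are invented for illustration and are far cruder than the actual design and search criteria used in the thesis.

```python
from itertools import product

LENGTH = 8   # number of blunt-end positions along one edge (hypothetical)
ACTIVE = 4   # number of active (protruding) blunt ends per bond pattern

def overlap(a, b):
    # crude binding score: positions where both edges present an active blunt end
    return sum(x & y for x, y in zip(a, b))

patterns = [p for p in product((0, 1), repeat=LENGTH) if sum(p) == ACTIVE]

# Greedy search for codes that bind well to themselves (score = ACTIVE) but only weakly
# to every other selected code ("orthogonal" bonds under this toy scoring rule).
selected = []
for p in patterns:
    if all(overlap(p, q) <= ACTIVE // 2 for q in selected):
        selected.append(p)

print(len(selected), "mutually orthogonal toy codes; first few:", selected[:4])
```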

The second part of this thesis describes the large-scale self-assembly of DNA origami into two-dimensional checkerboard-pattern crystals via surface diffusion. We developed a protocol where the diffusion of DNA origami occurs on a substrate and is dynamically controlled by changing the cationic condition of the system. We used stacking interactions to mediate connections between the origami, because of their potential for reconfiguring during the assembly process. Assembling DNA nanostructures directly on substrate surfaces can benefit nano/microfabrication processes by eliminating a pattern transfer step. At the same time, the use of DNA origami allows high complexity and unique addressability with six-nanometer resolution within each structural unit.

The third part of this thesis describes the use of stacking bonds as dynamically breakable bonds. To break the bonds, we used biological machinery called the ParMRC system extracted from bacteria. The system ensures that, when a cell divides, each daughter cell gets one copy of the cell’s DNA by actively pushing each copy to the opposite poles of the cell. We demonstrate dynamically expandable nanostructures, which makes stacking bonds a promising candidate for reconfigurable connectors for nanoscale machine parts.

Relevance: 20.00%

Abstract:

This thesis belongs to the growing field of economic networks. In particular, we develop three essays in which we study the problem of bargaining, discrete choice representation, and pricing in the context of networked markets. Despite analyzing very different problems, the three essays share the common feature of making use of a network representation to describe the market of interest.

In Chapter 1 we present an analysis of bargaining in networked markets. We make two contributions. First, we characterize market equilibria in a bargaining model, and find that players' equilibrium payoffs coincide with their degree of centrality in the network, as measured by Bonacich's centrality measure. This characterization allows us to map, in a simple way, network structures into market equilibrium outcomes, so that payoff dispersion in networked markets is driven by players' network positions. Second, we show that the market equilibrium for our model converges to the so-called eigenvector centrality measure. We show that the economic condition for reaching convergence is that the players' discount factor goes to one. In particular, we show how the discount factor, the matching technology, and the network structure interact in a very particular way in order to see the eigenvector centrality as the limiting case of our market equilibrium.

We point out that the eigenvector approach is a way of finding the most central or relevant players in terms of the “global” structure of the network, while paying less attention to patterns that are more “local”. Mathematically, the eigenvector centrality captures the relevance of players in the bargaining process, using the eigenvector associated with the largest eigenvalue of the adjacency matrix of a given network. Thus our result may be viewed as an economic justification of the eigenvector approach in the context of bargaining in networked markets.
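As a numerical illustration of this limiting behavior, the textbook form of the Bonacich measure, c(β) = (I − βA)^{-1} A·1, aligns in direction with the eigenvector of the largest eigenvalue as β approaches 1/λmax. The small network below and the use of this standard form of the measure (rather than the thesis's bargaining model) are our own choices.

```python
import numpy as np

# Adjacency matrix of a small, hypothetical 5-player network.
A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0],
              [1, 1, 0, 1, 1],
              [0, 1, 1, 0, 1],
              [0, 0, 1, 1, 0]], dtype=float)
n = A.shape[0]

w, V = np.linalg.eigh(A)                     # symmetric adjacency matrix
lam = w[-1]                                  # largest eigenvalue
eig_centrality = np.abs(V[:, -1]); eig_centrality /= eig_centrality.sum()

def bonacich(beta):
    # c(beta) = (I - beta*A)^{-1} A 1, well defined for beta < 1/lambda_max
    return np.linalg.solve(np.eye(n) - beta * A, A @ np.ones(n))

for frac in (0.3, 0.6, 0.9, 0.99):           # beta approaching the critical value 1/lam
    c = bonacich(frac / lam)
    print(f"beta = {frac:.2f}/lam:", np.round(c / c.sum(), 3))
print("eigenvector centrality:", np.round(eig_centrality, 3))
```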

As an application, we analyze the special case of seller-buyer networks, showing how our framework may be useful for analyzing price dispersion as a function of sellers' and buyers' network positions.

Finally, in Chapter 3 we study the problem of price competition and free entry in networked markets subject to congestion effects. In many environments, such as communication networks in which network flows are allocated, or transportation networks in which traffic is directed through the underlying road architecture, congestion plays an important role. In particular, we consider a network with multiple origins and a common destination node, where each link is owned by a firm that sets prices in order to maximize profits, whereas users want to minimize the total cost they face, which is given by the congestion cost plus the prices set by firms. In this environment, we introduce the notion of Markovian traffic equilibrium to establish the existence and uniqueness of a pure strategy price equilibrium, without assuming that the demand functions are concave or imposing particular functional forms for the latency functions. We derive explicit conditions that guarantee the existence and uniqueness of equilibria. Given this existence and uniqueness result, we apply our framework to study entry decisions and welfare, and establish that in congested markets with free entry, the number of firms exceeds the social optimum.

Relevance: 20.00%

Abstract:

Plate tectonics shapes our dynamic planet through the creation and destruction of lithosphere. This work focuses on increasing our understanding of the processes at convergent and divergent boundaries through geologic and geophysical observations at modern plate boundaries. Recent work has shown that the subducting slab in central Mexico is most likely the flattest on Earth, yet there was no consensus about how it originated. The first chapter of this thesis sets out to systematically test all previously proposed mechanisms for slab flattening on the Mexican case. What we have discovered is that there is only one model for which we can find no contradictory evidence. The lack of applicability of the standard mechanisms used to explain flat subduction in the Mexican example led us to question their application globally. The second chapter expands the search for a cause of flat subduction, in both space and time. We focus on the historical record of flat slabs in South America and look for a correlation between the shallowing and steepening of slab segments and the inferred thickness of the subducting oceanic crust. Using plate reconstructions and the assumption that a crustal anomaly formed on a spreading ridge will produce two conjugate features, we recreate the history of subduction along the South American margin and find that there is no correlation between the subduction of bathymetric highs and shallow subduction. These studies have proven that a subducting crustal anomaly is neither a sufficient nor a necessary condition for flat slab subduction. The final chapter in this thesis looks at the divergent plate boundary in the Gulf of California. Through geologic reconnaissance mapping and an intensive paleomagnetic sampling campaign, we try to constrain the location and orientation of a widespread volcanic marker unit, the Tuff of San Felipe. Although the resolution of the applied magnetic susceptibility technique proved inadequate to constrain the direction of the pyroclastic flow with high precision, we have been able to detect the tectonic rotation of coherent blocks as well as rotation within blocks.

Relevance: 20.00%

Abstract:

The simplest multiplicative systems in which arithmetical ideas can be defined are semigroups. For such systems, irreducible (prime) elements can be introduced, and conditions under which the fundamental theorem of arithmetic holds have been investigated (Clifford (3)). After identifying associates, the elements of the semigroup form a partially ordered set with respect to the ordinary division relation. This suggests the possibility of an analogous arithmetical result for abstract partially ordered sets. Although nothing corresponding to a product exists in a partially ordered set, there is a notion similar to the g.c.d. This is the meet operation, defined as the greatest lower bound. Thus irreducible elements, namely those elements not expressible as meets of proper divisors, can be introduced. The assumption of the ascending chain condition then implies that each element is representable as a reduced meet of irreducibles. The central problem of this thesis is to determine conditions on the structure of the partially ordered set in order that each element have a unique such representation.
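A concrete toy instance of these definitions (the divisor poset of 60, an example of ours and not drawn from the thesis): the meet is the g.c.d., and the sketch below lists the meet-irreducible elements and one reduced meet representation.

```python
from math import gcd

# Divisors of 60, partially ordered by divisibility; the meet (greatest lower bound) is the gcd.
elements = sorted(d for d in range(1, 61) if 60 % d == 0)

def is_meet_irreducible(m):
    # m is meet-irreducible if it cannot be written as the meet of two elements strictly above it
    above = [a for a in elements if a != m and a % m == 0]
    return all(gcd(a, b) != m for a in above for b in above if a < b)

irreducibles = [m for m in elements if is_meet_irreducible(m)]
print("meet-irreducible elements:", irreducibles)      # [12, 15, 20, 30, 60]; 60 is the top element
print("reduced representation:  1 = gcd(12, 15, 20)")  # dropping any factor changes the meet
```

Since the set is finite, the ascending chain condition holds trivially, and each divisor indeed has a representation as a reduced meet of these irreducibles; the thesis's question is when such representations are unique in a general partially ordered set.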

Part I contains preliminary results and introduces the principal tools of the investigation. In the second part, basic properties of the lattice of ideals and the connection between its structure and the irreducible decompositions of elements are developed. The proofs of these results are identical with the corresponding ones for the lattice case (Dilworth (2)). The last part contains those results whose proofs are peculiar to partially ordered sets and also contains the proof of the main theorem.

Relevance: 20.00%

Abstract:

Seismic reflection methods have been extensively used to probe the Earth's crust and suggest the nature of its formative processes. The analysis of multi-offset seismic reflection data extends the technique from a reconnaissance method to a powerful scientific tool that can be applied to test specific hypotheses. The treatment of reflections at multiple offsets becomes tractable if the assumptions of high-frequency rays are valid for the problem being considered. Their validity can be tested by applying the methods of analysis to full wave synthetics.

Three studies illustrate the application of these principles to investigations of the nature of the crust in southern California. A survey shot by the COCORP consortium in 1977 across the San Andreas fault near Parkfield revealed events in the record sections whose arrival time decreased with offset. The reflectors generating these events are imaged using a multi-offset three-dimensional Kirchhoff migration. Migrations of full wave acoustic synthetics having the same limitations in geometric coverage as the field survey demonstrate the utility of this back projection process for imaging. The migrated depth sections show the locations of the major physical boundaries of the San Andreas fault zone. The zone is bounded on the southwest by a near-vertical fault juxtaposing a Tertiary sedimentary section against uplifted crystalline rocks of the fault zone block. On the northeast, the fault zone is bounded by a fault dipping into the San Andreas, which includes slices of serpentinized ultramafics, intersecting it at 3 km depth. These interpretations can be made despite complications introduced by lateral heterogeneities.

In 1985 the Calcrust consortium designed a survey in the eastern Mojave desert to image structures in both the shallow and the deep crust. Preliminary field experiments showed that the major geophysical acquisition problem to be solved was the poor penetration of seismic energy through a low-velocity surface layer. Its effects could be mitigated through special acquisition and processing techniques. Data obtained from industry showed that quality data could be obtained from areas having a deeper, older sedimentary cover, causing a re-definition of the geologic objectives. Long offset stationary arrays were designed to provide reversed, wider angle coverage of the deep crust over parts of the survey. The preliminary field tests and constant monitoring of data quality and parameter adjustment allowed 108 km of excellent crustal data to be obtained.

This dataset, along with two others from the central and western Mojave, was used to constrain rock properties and the physical condition of the crust. The multi-offset analysis proceeded in two steps. First, an increase in reflection peak frequency with offset is indicative of a thinly layered reflector. The thickness and velocity contrast of the layering can be calculated from the spectral dispersion, to discriminate between structures resulting from broad scale or local effects. Second, the amplitude effects at different offsets of P-P scattering from weak elastic heterogeneities indicate whether the signs of the changes in density, rigidity, and Lame's parameter at the reflector agree or are opposed. The effects of reflection generation and propagation in a heterogeneous, anisotropic crust were contained by the design of the experiment and the simplicity of the observed amplitude and frequency trends. Multi-offset spectra and amplitude trend stacks of the three Mojave Desert datasets suggest that the most reflective structures in the middle crust are strong Poisson's ratio (σ) contrasts. Porous zones or the juxtaposition of units of mutually distant origin are indicated. Heterogeneities in σ increase towards the top of a basal crustal zone at ~22 km depth. The transitions to the basal zone and to the mantle include increases in σ. The Moho itself includes ~400 m of layering having a velocity higher than that of the uppermost mantle. The Moho maintains the same configuration across the Mojave despite 5 km of crustal thinning near the Colorado River. This indicates either that Miocene extension there thinned just the basal zone, or that the basal zone developed regionally after the extensional event.

Relevance: 20.00%

Abstract:

The concept of a "projection function" in a finite-dimensional real or complex normed linear space H (the function PM which carries every element into the closest element of a given subspace M) is set forth and examined.

If dim M = dim H - 1, then PM is linear. If PN is linear for all k-dimensional subspaces N, where 1 ≤ k < dim M, then PM is linear.

The projective bound Q, defined to be the supremum of the operator norm of PM for all subspaces, is in the range 1 ≤ Q < 2, and these limits are the best possible. For norms with Q = 1, PM is always linear, and a characterization of those norms is given.
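A quick numerical way to see these quantities in a toy setting: take a one-dimensional subspace of R^2 under an l_p norm, compute PM by one-dimensional minimization, and estimate its operator norm over the unit sphere, which should land in the stated range [1, 2). The particular norm, subspace direction, and sampling scheme below are arbitrary choices of ours.

```python
import numpy as np
from scipy.optimize import minimize_scalar

p = 4.0                                        # a smooth, strictly convex non-Euclidean norm
norm_p = lambda v: (np.abs(v) ** p).sum() ** (1.0 / p)

d = np.array([1.0, 0.3])                       # direction spanning the subspace M (hypothetical)

def P_M(x):
    # nearest point of M to x in the l_p norm; unique because the norm is strictly convex
    t = minimize_scalar(lambda s: norm_p(x - s * d)).x
    return t * d

# Estimate the operator norm of P_M:  sup { ||P_M(x)|| : ||x|| <= 1 }.
Q_est = 0.0
for theta in np.linspace(0.0, 2.0 * np.pi, 1000, endpoint=False):
    u = np.array([np.cos(theta), np.sin(theta)])
    u /= norm_p(u)                             # rescale onto the l_p unit sphere
    Q_est = max(Q_est, norm_p(P_M(u)))
print(f"operator norm of P_M ~ {Q_est:.3f}")   # expected to lie in [1, 2)
```

Here dim M = dim H - 1, so by the result quoted above PM is in fact linear; the estimate simply exhibits a norm value between the two stated limits.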

If H also has an inner product (defined independently of the norm), so that a dual norm can be defined, then when PM is linear its adjoint PM^H is the projection on (kernel PM) by the dual norm. The projective bounds of a norm and its dual are equal.

The notion of a pseudo-inverse F^+ of a linear transformation F is extended to non-Euclidean norms. The distance from F to the set of linear transformations G of lower rank (in the sense of the operator norm ∥F - G∥) is c/∥F^+∥, where c = 1 if the range of F fills its space, and 1 ≤ c < Q otherwise. The norms on both domain and range spaces have Q = 1 if and only if (F^+)^+ = F for every F. This condition is also sufficient to prove that we have (F^+)^H = (F^H)^+, where the latter pseudo-inverse is taken using dual norms.

In all results, the real and complex cases are handled in a completely parallel fashion.

Relevance: 20.00%

Abstract:

Studies in turbulence often focus on two flow conditions, both of which occur frequently in real-world flows and are sought after for their value in advancing turbulence theory. These are the high Reynolds number regime and the effect of wall surface roughness. In this dissertation, a Large-Eddy Simulation (LES) recreates both conditions over a wide range of Reynolds numbers Reτ = O(10^2)-O(10^8) and accounts for roughness by locally modeling the statistical effects of near-wall anisotropic fine scales in a thin layer immediately above the rough surface. A subgrid, roughness-corrected wall model is introduced to dynamically transmit this modeled information from the wall to the outer LES, which uses a stretched-vortex subgrid-scale model operating in the bulk of the flow. Of primary interest is the Reynolds number and roughness dependence of these flows in terms of first and second order statistics.

The LES is first applied to a fully turbulent uniformly smooth/rough channel flow to capture the flow dynamics over the smooth, transitionally rough and fully rough regimes. Results include a Moody-like diagram for the wall-averaged friction factor, believed to be the first of its kind obtained from LES. Confirmation is found for experimentally observed logarithmic behavior in the normalized stream-wise turbulent intensities. Tight logarithmic collapse, scaled on the wall friction velocity, is found for smooth-wall flows when Reτ ≥ O(10^6) and in fully rough cases. Since the wall model operates locally and dynamically, the framework is used to investigate non-uniform roughness distributions in a channel, examining the flow adjustments to sudden surface changes. Recovery of mean quantities and turbulent statistics after the transitions is discussed qualitatively and quantitatively at various roughness and Reynolds number levels. The internal boundary layer, defined as the border between the flow affected by the new surface condition and the unaffected part, is computed, and a collapse of the profiles on a length scale containing the logarithm of the friction Reynolds number is presented.

Finally, we turn to the possibility of expanding the present framework to accommodate more general geometries. As a first step, the whole LES framework is modified for use in the curvilinear geometry of a fully developed turbulent pipe flow, with implementation carried out in a spectral element solver capable of handling complex wall profiles. The friction factors have shown favorable agreement with the superpipe data, and the LES estimates of the Kármán constant and additive constant of the log-law closely match values obtained from experiment.