11 results for "compressive"
in CaltechTHESIS
Abstract:
We consider the radially symmetric nonlinear von Kármán plate equations for circular or annular plates in the limit of small thickness. The loads on the plate consist of a radially symmetric pressure load and a uniform edge load. The dependence of the steady states on the edge load and thickness is studied using asymptotics as well as numerical calculations. The von Kármán plate equations are a singular perturbation of the Föppl membrane equation in the asymptotic limit of small thickness. We study the role of compressive membrane solutions in the small-thickness asymptotic behavior of the plate solutions.
We give evidence for the existence of a singular compressive solution for the circular membrane and show by a singular perturbation expansion that the nonsingular compressive solutions approach this singular solution as the radial stress at the center of the plate vanishes. In this limit, an infinite number of folds occur with respect to the edge load. Similar behavior is observed for the annular membrane with zero edge load at the inner radius in the limit as the circumferential stress vanishes.
We develop multiscale expansions that are asymptotic to members of this family of compressive solutions for plates with edges that are elastically supported against rotation. At some thicknesses this approximation breaks down and a boundary layer appears at the center of the plate. In the limit of small normal load, the points of breakdown approach the bifurcation points corresponding to buckling of the undeflected state. A uniform asymptotic expansion for small thickness, combining the boundary layer with a multiscale approximation of the outer solution, is developed for this case. These approximations complement the well-known boundary layer expansions based on tensile membrane solutions in describing the bending and stretching of thin plates. The approximation becomes inconsistent as the clamped state is approached by increasing the resistance against rotation at the edge. We prove that such an expansion for the clamped circular plate cannot exist unless the pressure load is self-equilibrating.
Abstract:
Part I.
We have developed a technique for measuring the depth-time history of rigid-body penetration into brittle materials (hard rocks and concretes) under decelerations of ~10^5 g. The technique includes bar-coded projectiles, sabot-projectile separation, and detection and recording systems. Because the technique yields very dense data on the penetration depth-time history, the penetration velocity can be deduced. Error analysis shows that the technique has a small intrinsic error of ~3-4% in time during penetration and 0.3 to 0.7 mm in penetration depth. A series of 4140 steel projectile penetration experiments into G-mixture mortar targets was conducted using the Caltech 40 mm gas/powder gun in the velocity range of 100 to 500 m/s.
We report, for the first time, the whole depth-time history of rigid-body penetration into brittle materials (the G-mixture mortar) under 10^5 g deceleration. Based on the experimental results, including the penetration depth-time history, damage of recovered target and projectile materials, and theoretical analysis, we find:
1. Target materials are damaged via compacting in the region in front of a projectile and via brittle radial and lateral crack propagation in the region surrounding the penetration path. The results suggest that expected cracks in front of penetrators may be stopped by a comminuted region that is induced by wave propagation. Aggregate erosion on the projectile lateral surface is < 20% of the final penetration depth. This result suggests that the effect of lateral friction on the penetration process can be ignored.
2. Final penetration depth, Pmax, scales linearly with initial projectile energy per unit cross-section area, e_s, when targets are intact after impact. Based on the experimental data on the mortar targets, the relation is Pmax(mm) = 1.15 e_s (J/mm^2) + 16.39.
3. Estimation of the energy needed to create a unit penetration volume suggests that the average pressure acting on the target material during penetration is ~10 to 20 times higher than the unconfined strength of the target materials under quasi-static loading, and 3 to 4 times higher than the highest possible pressure due to friction and material strength and its rate dependence. In addition, the experimental data show that the interaction between cracks and the target free surface significantly affects the penetration process.
4. Based on the fact that the penetration duration, tmax, increases slowly with e_s and is approximately independent of projectile radius, the dependence of tmax on projectile length is suggested to be described by tmax(μs) = 2.08 e_s (J/mm^2) + 349.0 × m/(πR^2), in which m is the projectile mass in grams and R is the projectile radius in mm. The prediction from this relation is in reasonable agreement with the experimental data for different projectile lengths.
5. Deduced penetration velocity time histories suggest that whole penetration history is divided into three stages: (1) An initial stage in which the projectile velocity change is small due to very small contact area between the projectile and target materials; (2) A steady penetration stage in which projectile velocity continues to decrease smoothly; (3) A penetration stop stage in which projectile deceleration jumps up when velocities are close to a critical value of ~ 35 m/s.
6. The deduced average deceleration, a, in the steady penetration stage for projectiles with the same dimensions is found to be a(g) = 192.4v + 1.89 × 10^4, where v is the initial projectile velocity in m/s. The average pressure acting on target materials during penetration is estimated to be comparable to the shock wave pressure.
7. A similarity of the penetration process is found, described by a relation between normalized penetration depth, P/Pmax, and normalized penetration time, t/tmax, as P/Pmax = f(t/tmax), where f is a function of t/tmax. After f(t/tmax) is determined using experimental data for projectiles with 150 mm length, the penetration depth-time history for projectiles with 100 mm length predicted by this relation is in good agreement with experimental data. This similarity also predicts that average deceleration increases with decreasing projectile length, which is verified by the experimental data.
8. Based on the penetration process analysis and the present data, a first principle model for rigid body penetration is suggested. The model incorporates the models for contact area between projectile and target materials, friction coefficient, penetration stop criterion, and normal stress on the projectile surface. The most important assumptions used in the model are: (1) The penetration process can be treated as a series of impact events, therefore, pressure normal to projectile surface is estimated using the Hugoniot relation of target material; (2) The necessary condition for penetration is that the pressure acting on target materials is not lower than the Hugoniot elastic limit; (3) The friction force on projectile lateral surface can be ignored due to cavitation during penetration. All the parameters involved in the model are determined based on independent experimental data. The penetration depth time histories predicted from the model are in good agreement with the experimental data.
9. Based on planar impact and previous quasi-static experimental data, the strain rate dependence of the mortar compressive strength is described by σf/σ0f = exp(0.0905(log(ε̇/ε̇0))^1.14) in the strain rate range of 10^-7/s to 10^3/s (σ0f and ε̇0 are the reference compressive strength and strain rate, respectively). The non-dispersive Hugoniot elastic wave in the G-mixture has an amplitude of ~0.14 GPa and a velocity of ~4.3 km/s.
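The empirical fits in items 2, 4, and 9 can be evaluated directly. Below is a minimal sketch; the projectile dimensions and impact velocity are illustrative assumptions (chosen inside the reported 100-500 m/s range), not values from the abstract, and the strain-rate relation is read as σf/σ0f = exp[0.0905(log10(ε̇/ε̇0))^1.14] with the reference rate assumed to sit at the quasi-static end of the stated range:

```python
import math

def p_max_mm(e_s):
    """Item 2: final penetration depth (mm) from projectile energy
    per unit cross-section area e_s (J/mm^2), for intact targets."""
    return 1.15 * e_s + 16.39

def t_max_us(e_s, m_g, r_mm):
    """Item 4: penetration duration (microseconds) for a projectile
    of mass m_g (grams) and radius r_mm (mm)."""
    return 2.08 * e_s + 349.0 * m_g / (math.pi * r_mm**2)

def strength_ratio(rate, rate_0=1e-7):
    """Item 9: dynamic-to-reference compressive strength ratio; the
    reference rate is assumed to be 1e-7 /s (quasi-static end) so the
    log ratio stays non-negative over the stated range."""
    return math.exp(0.0905 * math.log10(rate / rate_0) ** 1.14)

# Illustrative projectile: steel, 150 mm long, 20 mm radius, 300 m/s.
rho = 7.85e-3                              # g/mm^3, steel density
length, radius, v = 150.0, 20.0, 300.0     # mm, mm, m/s
area = math.pi * radius**2                 # mm^2
mass = rho * area * length                 # g
e_s = 0.5 * (mass / 1000.0) * v**2 / area  # J/mm^2
print(f"e_s = {e_s:.1f} J/mm^2")
print(f"P_max ~ {p_max_mm(e_s):.0f} mm, t_max ~ {t_max_us(e_s, mass, radius):.0f} us")
print(f"strength ratio at 1e3/s: {strength_ratio(1e3):.2f}")
```

With these assumed dimensions the fits return a penetration depth of a few tens of millimeters and a duration of a few hundred microseconds, the same order as the experiments described above.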
Part II.
Stress wave profiles in vitreous GeO2 were measured using piezoresistance gauges in the pressure range of 5 to 18 GPa under planar plate and spherical projectile impact. Experimental data show that the response of vitreous GeO2 to planar shock loading can be divided into three stages: (1) A ramp elastic precursor with a peak amplitude of 4 GPa and a peak particle velocity of 333 m/s; the wave velocity decreases from the initial longitudinal elastic wave velocity of 3.5 km/s to 2.9 km/s at 4 GPa. (2) A ramp wave with an amplitude of 2.11 GPa follows the precursor when the peak loading pressure is 8.4 GPa; the wave velocity drops below the bulk wave velocity in this stage. (3) A shock wave achieving the final shock state forms when the peak pressure is > 6 GPa. The Hugoniot relation is D = 0.917 + 1.711u (km/s), using the present data and the data of Jackson and Ahrens [1979], when the shock wave pressure is between 6 and 40 GPa for ρ0 = 3.655 g/cm^3. Based on the present data, the phase change from 4-fold to 6-fold coordination of Ge^4+ with O^2- in vitreous GeO2 occurs in the pressure range of 4 to 15 ± 1 GPa under planar shock loading. Comparison of the shock loading data for fused SiO2 with those for vitreous GeO2 demonstrates that the transformation to the rutile structure in both media is similar. The Hugoniots of vitreous GeO2 and fused SiO2 are found to coincide approximately if the pressure in fused SiO2 is scaled by the ratio of fused SiO2 to vitreous GeO2 density. This result, as well as the same structure, provides the basis for considering vitreous GeO2 as an analogous material to fused SiO2 under shock loading. Experimental results from the spherical projectile impact demonstrate: (1) The supported elastic shock in fused SiO2 decays less rapidly than a linear elastic wave when the elastic wave stress amplitude is higher than 4 GPa.
The supported elastic shock in vitreous GeO2 decays faster than a linear elastic wave; (2) In vitreous GeO2, unsupported shock waves with peak pressures in the phase transition range (4-15 GPa) decay with propagation distance, x, as ∝ x^-3.35, close to the prediction of Chen et al. [1998]. Based on a simple analysis of spherical wave propagation, we find that the different decay rates of a spherical elastic wave in fused SiO2 and vitreous GeO2 are predictable on the basis of the compressibility variation with stress under one-dimensional strain conditions in the two materials.
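The linear Hugoniot fit quoted above can be combined with the standard momentum jump condition P = ρ0·D·u to map particle velocity to shock pressure; a small sketch (the sample particle velocities are illustrative choices, and the decay helper simply evaluates the x^-3.35 law reported for unsupported shocks):

```python
RHO_0 = 3.655  # g/cm^3, initial density of vitreous GeO2

def shock_velocity(u):
    """Hugoniot fit D = 0.917 + 1.711 u (both in km/s), reported for
    shock pressures between 6 and 40 GPa."""
    return 0.917 + 1.711 * u

def shock_pressure(u):
    """Momentum jump condition: P (GPa) = rho_0 (g/cm^3) * D * u (km/s)."""
    return RHO_0 * shock_velocity(u) * u

def peak_pressure_ratio(x1, x2):
    """Relative decay of an unsupported shock, P ~ x**-3.35, from x1 to x2."""
    return (x2 / x1) ** -3.35

for u in (1.0, 1.5, 2.0):
    print(f"u = {u} km/s: D = {shock_velocity(u):.2f} km/s, "
          f"P = {shock_pressure(u):.1f} GPa")
print(f"doubling propagation distance leaves "
      f"{peak_pressure_ratio(1.0, 2.0):.2f} of peak pressure")
```

The unit bookkeeping is convenient here: a density in g/cm^3 times two velocities in km/s gives pressure directly in GPa.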
Abstract:
This thesis explores the design, construction, and applications of the optoelectronic swept-frequency laser (SFL). The optoelectronic SFL is a feedback loop designed around a swept-frequency (chirped) semiconductor laser (SCL) to control its instantaneous optical frequency, such that the chirp characteristics are determined solely by a reference electronic oscillator. The resultant system generates precisely controlled optical frequency sweeps. In particular, we focus on linear chirps because of their numerous applications. We demonstrate optoelectronic SFLs based on vertical-cavity surface-emitting lasers (VCSELs) and distributed-feedback lasers (DFBs) at wavelengths of 1550 nm and 1060 nm. We develop an iterative bias current predistortion procedure that enables SFL operation at very high chirp rates, up to 10^16 Hz/sec. We describe commercialization efforts and implementation of the predistortion algorithm in a stand-alone embedded environment, undertaken as part of our collaboration with Telaris, Inc. We demonstrate frequency-modulated continuous-wave (FMCW) ranging and three-dimensional (3-D) imaging using a 1550 nm optoelectronic SFL.
We develop the technique of multiple source FMCW (MS-FMCW) reflectometry, in which the frequency sweeps of multiple SFLs are "stitched" together in order to increase the optical bandwidth, and hence improve the axial resolution, of an FMCW ranging measurement. We demonstrate computer-aided stitching of DFB and VCSEL sweeps at 1550 nm. We also develop and demonstrate hardware stitching, which enables MS-FMCW ranging without additional signal processing. The culmination of this work is the hardware stitching of four VCSELs at 1550 nm for a total optical bandwidth of 2 THz, and a free-space axial resolution of 75 microns.
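The 75-micron figure follows from the standard FMCW axial-resolution relation Δz = c/(2B); a one-line check against the stitched 2 THz bandwidth:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def axial_resolution_m(bandwidth_hz):
    """Free-space FMCW axial resolution, delta_z = c / (2 B)."""
    return C / (2.0 * bandwidth_hz)

# Four stitched VCSEL sweeps totalling 2 THz of optical bandwidth:
print(f"{axial_resolution_m(2e12) * 1e6:.0f} microns")  # -> 75 microns
```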
We describe our work on the tomographic imaging camera (TomICam), a 3-D imaging system based on FMCW ranging that features non-mechanical acquisition of transverse pixels. Our approach uses a combination of electronically tuned optical sources and low-cost full-field detector arrays, completely eliminating the need for moving parts traditionally employed in 3-D imaging. We describe the basic TomICam principle, and demonstrate single-pixel TomICam ranging in a proof-of-concept experiment. We also discuss the application of compressive sensing (CS) to the TomICam platform, and perform a series of numerical simulations. These simulations show that tenfold compression is feasible in CS TomICam, which effectively improves the volume acquisition speed by a factor of ten.
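The compressive-sensing idea behind CS TomICam can be sketched with a generic greedy recovery of a sparse vector from underdetermined linear measurements. This is textbook orthogonal matching pursuit on a hand-checkable toy instance, not the thesis's actual measurement model or reconstruction code:

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily recover a k-sparse x
    from underdetermined measurements y = A @ x."""
    support, residual = [], y.copy()
    for _ in range(k):
        # Pick the column most correlated with the current residual.
        support.append(int(np.argmax(np.abs(A.conj().T @ residual))))
        # Re-fit on the chosen support and update the residual.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

# Toy instance: 2 measurements of a 1-sparse vector in R^3.
A = np.array([[1.0, 0.0, 2**-0.5],
              [0.0, 1.0, 2**-0.5]])
x_hat = omp(A, A @ np.array([0.0, 0.0, 1.0]), k=1)
print(np.round(x_hat, 6))  # recovers the third coordinate
```

In the same sense, tenfold compression means roughly N/10 coded measurements standing in for N transverse acquisitions, which is where the volume acquisition speedup comes from.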
We develop chirped-wave phase-locking techniques, and apply them to coherent beam combining (CBC) of chirped-seed amplifiers (CSAs) in a master oscillator power amplifier configuration. The precise chirp linearity of the optoelectronic SFL enables non-mechanical compensation of optical delays using acousto-optic frequency shifters, and its high chirp rate simultaneously increases the stimulated Brillouin scattering (SBS) threshold of the active fiber. We characterize a 1550 nm chirped-seed amplifier coherent-combining system. We use a chirp rate of 5×10^14 Hz/sec to increase the amplifier SBS threshold threefold, when compared to a single-frequency seed. We demonstrate efficient phase-locking and electronic beam steering of two 3 W erbium-doped fiber amplifier channels, achieving temporal phase noise levels corresponding to interferometric fringe visibilities exceeding 98%.
Abstract:
A central objective in signal processing is to infer meaningful information from a set of measurements or data. While most signal models have an overdetermined structure (fewer unknowns than equations), traditionally very few statistical estimation problems have considered a data model which is underdetermined (more unknowns than equations). In recent times, however, an explosion of theoretical and computational methods has been developed, primarily to study underdetermined systems by imposing sparsity on the unknown variables. This is motivated by the observation that in spite of the huge volume of data that arises in sensor networks, genomics, imaging, particle physics, web search, etc., its information content is often much smaller than the number of raw measurements. This has given rise to the possibility of reducing the number of measurements by downsampling the data, which automatically gives rise to underdetermined systems.
In this thesis, we provide new directions for estimation in an underdetermined system, both for a class of parameter estimation problems and also for the problem of sparse recovery in compressive sensing. There are two main contributions of the thesis: design of new sampling and statistical estimation algorithms for array processing, and development of improved guarantees for sparse reconstruction by introducing a statistical framework to the recovery problem.
We consider underdetermined observation models in array processing where the number of unknown sources simultaneously received by the array can be considerably larger than the number of physical sensors. We study new sparse spatial sampling schemes (array geometries) and propose new recovery algorithms that can exploit priors on the unknown signals and unambiguously identify all the sources. The proposed sampling structure is generic enough to be extended to multiple dimensions as well as to exploit different kinds of priors in the model, such as correlation, higher order moments, etc.
Recognizing the role of correlation priors and suitable sampling schemes for underdetermined estimation in array processing, we introduce a correlation aware framework for recovering sparse support in compressive sensing. We show that it is possible to strictly increase the size of the recoverable sparse support using this framework provided the measurement matrix is suitably designed. The proposed nested and coprime arrays are shown to be appropriate candidates in this regard. We also provide new guarantees for convex and greedy formulations of the support recovery problem and demonstrate that it is possible to strictly improve upon existing guarantees.
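The nested-array idea can be made concrete: pairwise differences of sensor positions act as virtual sensors, so a two-level nested geometry with N1 + N2 physical elements yields a filled difference coarray of O(N1·N2) correlation lags, which is why correlation-aware methods can identify more sources than sensors. A minimal sketch (the specific positions below are the standard two-level nested layout, used here as an illustration):

```python
def nested_positions(n1, n2):
    """Sensor positions (in units of the base spacing) of a
    two-level nested array: a dense inner ULA of n1 elements plus
    a sparse outer ULA of n2 elements with spacing n1 + 1."""
    inner = list(range(1, n1 + 1))
    outer = [(n1 + 1) * k for k in range(1, n2 + 1)]
    return inner + outer

def difference_coarray(positions):
    """All pairwise position differences (the virtual sensor lags)."""
    return sorted({a - b for a in positions for b in positions})

pos = nested_positions(3, 3)          # 6 physical sensors
lags = difference_coarray(pos)
print("sensors:", pos)
print("lags:", lags[0], "to", lags[-1], "-", len(lags), "virtual positions")
```

Here 6 physical sensors produce 23 contiguous virtual lags (-11 through 11), so second-order statistics expose far more degrees of freedom than the physical aperture alone.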
This new paradigm of underdetermined estimation that explicitly establishes the fundamental interplay between sampling, statistical priors and the underlying sparsity, leads to exciting future research directions in a variety of application areas, and also gives rise to new questions that can lead to stand-alone theoretical results in their own right.
Abstract:
The development of Ring Opening Metathesis Polymerization has allowed the world of block copolymers to expand into brush block copolymers. Brush block copolymers consist of a polymer backbone with polymeric side chains, forcing the backbone to hold a stretched conformation and giving it a worm-like shape. These brush block copolymers have a number of advantages over traditional block copolymers, including faster self-assembly behavior, larger domain sizes, and much less entanglement. This makes them an ideal candidate in the development of a bottom-up approach to forming photonic crystals. Photonic crystals are periodic nanostructures that transmit and reflect only certain wavelengths of light, forming a band gap. They are used in a number of coatings and other optical applications. One- and two-dimensional photonic crystals are commercially available, though they are often expensive and difficult to manufacture. Previous work has focused on the creation of one-dimensional photonic crystals from brush block copolymers. In this thesis, I will focus on the synthesis and characterization of asymmetric brush block copolymers for self-assembly into two- and three-dimensional photonic crystals. Three series of brush block copolymers were made and characterized by Gel Permeation Chromatography and Nuclear Magnetic Resonance spectroscopy. They were then made into films through compressive thermal annealing and characterized by UV-Vis Spectroscopy and Scanning Electron Microscopy. Evidence of non-lamellar structures was seen, indicating the first reported creation of two- or three-dimensional photonic crystals from brush block copolymers.
Abstract:
This work is divided into two independent papers.
PAPER 1.
Spall velocities were measured for nine experimental impacts into San Marcos gabbro targets. Impact velocities ranged from 1 to 6.5 km/sec. Projectiles were iron, aluminum, lead, and basalt of varying sizes. The projectile masses ranged from a 4 g lead bullet to a 0.04 g aluminum sphere. The velocities of fragments were measured from high-speed films taken of the events. The maximum spall velocity observed was 30 m/sec, or 0.56 percent of the 5.4 km/sec impact velocity. The measured velocities were compared to the spall velocities predicted by the spallation model of Melosh (1984). The compatibility between the spallation model for large planetary impacts and the results of these small-scale experiments is considered in detail.
The targets were also bisected to observe the pattern of internal fractures. A series of fractures were observed, whose location coincided with the boundary between rock subjected to the peak shock compression and a theoretical "near surface zone" predicted by the spallation model. Thus, between this boundary and the free surface, the target material should receive reduced levels of compressive stress as compared to the more highly shocked region below.
PAPER 2.
Carbonate samples from the nuclear explosion crater OAK and a terrestrial impact crater, Meteor Crater, were analyzed for shock damage using electron paramagnetic resonance (EPR). The first series of samples for OAK Crater was obtained from six boreholes within the crater, and the second series consisted of ejecta samples recovered from the crater floor. The degree of shock damage in the carbonate material was assessed by comparing the sample spectra to spectra of Solenhofen limestone, which had been shocked to known pressures.
The results of the OAK borehole analysis have identified a thin zone of highly shocked carbonate material underneath the crater floor. This zone has a maximum depth of approximately 200 ft below the sea floor at the ground zero borehole and decreases in depth towards the crater rim. A layer of highly shocked material is also found on the surface in the vicinity of the reference borehole, located outside the crater. This material could represent a fallout layer. The ejecta samples have experienced a range of shock pressures.
It was also demonstrated that the EPR technique is feasible for the study of terrestrial impact craters formed in carbonate bedrock. The results for the Meteor Crater analysis suggest a slight degree of shock damage present in the β member of the Kaibab Formation exposed in the crater walls.
Abstract:
This thesis consists of three parts. Chapter 2 deals with the dynamic buckling behavior of steel braces under cyclic axial end displacement. Braces under such a loading condition belong to a class of "acceleration magnifying" structural components, in which a small motion at the loading points can cause large internal acceleration and inertia. This member-level inertia is frequently ignored in current studies of braces and braced structures. This chapter shows that, under certain conditions, the inclusion of the member-level inertia can lead to brace behavior fundamentally different from that predicted by the quasi-static method. This result has significance for the correct use of the quasi-static, pseudo-dynamic, and static condensation methods in the simulation of braces or braced structures under dynamic loading. The strain magnitude and distribution in the braces are also studied in this chapter.
Chapter 3 examines the effect of column uplift on the earthquake response of braced steel frames and explores the feasibility of flexible column-base anchoring. It is found that fully anchored braced-bay columns can induce extremely large internal forces in the braced-bay members and their connections, thus increasing the risk of failures observed in recent earthquakes. Flexible braced-bay column anchoring can significantly reduce the braced-bay member forces, but also introduces large story drift and column uplift. The pounding of an uplifting column with its support can result in very high compressive axial force.
Chapter 4 conducts a comparative study on the effectiveness of a proposed non-buckling bracing system and several conventional bracing systems. The non-buckling bracing system eliminates buckling and thus can be composed of small individual braces distributed widely in a structure to reduce bracing force concentration and increase redundancy. The elimination of buckling results in a significantly more effective bracing system compared with the conventional bracing systems. Among the conventional bracing systems, bracing configurations and end conditions for the bracing members affect the effectiveness.
The studies in Chapter 3 and Chapter 4 also indicate that code-designed conventionally braced steel frames can experience unacceptably severe response under the strong ground motions recorded during the recent Northridge and Kobe earthquakes.
Abstract:
The Johnny Lyon Hills area is located in Cochise County in southeastern Arizona. The rocks of the area include a central core of Lower pre-Cambrian igneous and metamorphic rocks surrounded by a complexly faulted and tilted section of Upper pre-Cambrian and Paleozoic strata. Limited exposures of Mesozoic and Tertiary sedimentary and volcanic rocks are present at the north end of the map area. Late Tertiary and Quaternary alluvium almost completely surrounds and overlaps upon the older rocks.
The older pre-Cambrian rocks include a section of more than 9000 feet of generally moderately metamorphosed graywackes, slates and conglomerates of the Pinal schist injected in zones by somewhat younger rhyolite sheets. The original sediments were deposited in a geosyncline whose extent probably included large parts of Arizona, New Mexico and west Texas. During the Mazatzal Revolution the Pinal schist was deformed into northeast-trending, steeply dipping and plunging structures and the entire local section was overturned steeply toward the northwest. The pre-Cambrian Johnny Lyon granodiorite was emplaced as a large epi-tectonic pluton which modified the metamorphic character of part of the Pinal schist. Larsen method determinations indicate an age of about 715 million years for this rock, which is about the minimum age compatible with the geologic relations.
The Laramide orogeny produced numerous major thrust faults in the area involving all rocks older than and including the Lower Cretaceous Bisbee group. Major compression from the southwest and subsequent superimposed thrusting from the southeast and east are indicated. Minimum thrust displacements of more than a mile are clear and the probable displacements are of much greater magnitude. The crystalline core behaved as a single structural unit and probably caused important local divergences from the regional pattern of northeast-trending compressive forces. The massif was rotated as a unit 40 degrees or more about a northwest-trending axis overturning the pre-Cambrian fold axes in the Pinal schist.
Swarms of Late Cretaceous(?) or Early Tertiary(?) lamprophyric dikes cross the Laramide structures and are probably related to the large Texas Canyon stock several miles southeast of the map area. Intermittent high angle faulting, both older and younger than the dikes, has continued since the Laramide orogeny and has been superimposed on the older structures. This steep faulting combined with the fundamental northwesterly Laramide structural grain to produce the northwesterly trends characteristic of the mountain ridges and valleys of the area.
Abstract:
The study of codes, classically motivated by the need to communicate information reliably in the presence of error, has found new life in fields as diverse as network communication, distributed storage of data, and even the design of linear measurements used in compressive sensing. But in all contexts, a code typically involves exploiting the algebraic or geometric structure underlying an application. In this thesis, we examine several problems in coding theory, and try to gain some insight into the algebraic structure behind them.
The first is the study of the entropy region - the space of all possible vectors of joint entropies which can arise from a set of discrete random variables. Understanding this region is essentially the key to optimizing network codes for a given network. To this end, we employ a group-theoretic method of constructing random variables producing so-called "group-characterizable" entropy vectors, which are capable of approximating any point in the entropy region. We show how small groups can be used to produce entropy vectors which violate the Ingleton inequality, a fundamental bound on entropy vectors arising from the random variables involved in linear network codes. We discuss the suitability of these groups to design codes for networks which could potentially outperform linear coding.
The second topic we discuss is the design of frames with low coherence, closely related to finding spherical codes in which the codewords are unit vectors spaced out around the unit sphere so as to minimize the magnitudes of their mutual inner products. We show how to build frames by selecting a cleverly chosen set of representations of a finite group to produce a "group code" as described by Slepian decades ago. We go on to reinterpret our method as selecting a subset of rows of a group Fourier matrix, allowing us to study and bound our frames' coherences using character theory. We discuss the usefulness of our frames in sparse signal recovery using linear measurements.
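The "subset of rows of a group Fourier matrix" construction can be illustrated with the cyclic group Z_7: keeping the three rows of the 7-point DFT matrix indexed by the difference set {1, 2, 4} yields an equiangular frame whose coherence meets the Welch lower bound. This is a classic small example chosen for illustration; the thesis's frames come from representations of more general groups:

```python
import numpy as np

n, rows = 7, [1, 2, 4]   # {1, 2, 4} is a (7, 3, 1) difference set mod 7

# Keep three rows of the 7x7 Fourier matrix of Z_7; columns become
# 7 unit-norm frame vectors in C^3.
F = np.exp(-2j * np.pi * np.outer(rows, np.arange(n)) / n)
F /= np.sqrt(len(rows))

# Coherence: largest off-diagonal Gram-matrix magnitude.
G = np.abs(F.conj().T @ F)
np.fill_diagonal(G, 0.0)
coherence = G.max()

# Welch bound for 7 unit vectors in dimension 3.
welch = np.sqrt((n - len(rows)) / (len(rows) * (n - 1)))
print(f"coherence = {coherence:.4f}, Welch bound = {welch:.4f}")
```

All off-diagonal inner products here have the same magnitude, sqrt(2)/3, so the frame is equiangular and optimal in the coherence sense; this is the property that makes low-coherence frames useful as measurement matrices in sparse recovery.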
The final problem we investigate is that of coding with constraints, most recently motivated by the demand for ways to encode large amounts of data using error-correcting codes so that any small loss can be recovered from a small set of surviving data. Most often, this involves using a systematic linear error-correcting code in which each parity symbol is constrained to be a function of some subset of the message symbols. We derive bounds on the minimum distance of such a code based on its constraints, and characterize when these bounds can be achieved using subcodes of Reed-Solomon codes.
Abstract:
Three different categories of flow problems of a fluid containing small particles are considered here. They are: (i) a fluid containing small, non-reacting particles (Parts I and II); (ii) a fluid containing reacting particles (Parts III and IV); and (iii) a fluid containing particles of two distinct sizes with collisions between the two groups of particles (Part V).
Part I
A numerical solution is obtained for a fluid containing small particles flowing over an infinite disc rotating at a constant angular velocity. It is a boundary layer type flow, and the boundary layer thickness for the mixture is estimated. For large Reynolds number, the solution suggests the boundary layer approximation of a fluid-particle mixture by assuming W = Wp. The error introduced is consistent with Prandtl's boundary layer approximation. Outside the boundary layer, the flow field has to satisfy the "inviscid equation" in which the viscous stress terms are absent while the drag force between the particle cloud and the fluid is still important. Increase of particle concentration reduces the boundary layer thickness and the amount of mixture being transported outwardly is reduced. A new parameter, β = 1/(Ωτv), is introduced, which is also proportional to μ. The secondary flow of the particle cloud depends very much on β. For small values of β, the particle cloud velocity attains its maximum value on the surface of the disc, and for infinitely large values of β, both the radial and axial particle velocity components vanish on the surface of the disc.
Part II
The “inviscid” equation for a gas-particle mixture is linearized to describe the flow over a wavy wall. Corresponding to the Prandtl-Glauert equation for pure gas, a fourth order partial differential equation in terms of the velocity potential ϕ is obtained for the mixture. The solution is obtained for the flow over a periodic wavy wall. For equilibrium flows where λv and λT approach zero and frozen flows in which λv and λT become infinitely large, the flow problem is basically similar to that obtained by Ackeret for a pure gas. For finite values of λv and λT, all quantities except v are not in phase with the wavy wall. Thus the drag coefficient CD is present even in the subsonic case, and similarly, all quantities decay exponentially for supersonic flows. The phase shift and the attenuation factor increase for increasing particle concentration.
Part III
Using the boundary layer approximation, the initial development of the combustion zone between the laminar mixing of two parallel streams of oxidizing agent and small, solid, combustible particles suspended in an inert gas is investigated. For the special case when the two streams are moving at the same speed, a Green’s function exists for the differential equations describing first order gas temperature and oxidizer concentration. Solutions in terms of error functions and exponential integrals are obtained. Reactions occur within a relatively thin region of the order of λD. Thus, it seems advantageous in the general study of two-dimensional laminar flame problems to introduce a chemical boundary layer of thickness λD within which reactions take place. Outside this chemical boundary layer, the flow field corresponds to the ordinary fluid dynamics without chemical reaction.
Part IV
The shock wave structure in a condensing medium of small liquid droplets suspended in a homogeneous gas-vapor mixture consists of the conventional compressive wave followed by a relaxation region in which the particle cloud and gas mixture attain momentum and thermal equilibrium. Immediately behind the compressive wave, the partial pressure corresponding to the vapor concentration in the gas mixture exceeds the vapor pressure of the liquid droplets, and condensation sets in. Farther downstream of the shock, evaporation begins when the particle temperature is raised by the hot surrounding gas mixture. The thickness of the condensation region depends strongly on the latent heat; for relatively high latent heat, the condensation zone is small compared with λD.
For solid particles suspended initially in an inert gas, the relaxation zone immediately behind the compression wave begins with a region in which the particle temperature is raised to the melting point. Once the particles are completely melted and their temperature increases further, evaporation of the particles also plays a role.
The equilibrium condition downstream of the shock can be calculated and is independent of the model of the particle-gas mixture interaction.
Part V
For a gas containing particles of two distinct sizes and satisfying certain conditions, momentum transfer due to collisions between the two groups of particles can be taken into account using the classical elastic spherical ball model. Both in the relatively simple normal shock wave problem and in the perturbation solutions for nozzle flow, the transfer of momentum due to collisions, which decreases the velocity difference between the two groups of particles, is clearly demonstrated. The difference in temperature compared with the collisionless case is negligible.
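The elastic spherical ball picture reduces, for a single head-on encounter, to the textbook 1-D elastic collision. The sketch below (an illustration with hypothetical masses and velocities, not the thesis's full inter-group collision model) shows the momentum exchanged between a fast and a slow particle while momentum and kinetic energy are conserved:

```python
# Minimal sketch of the classical elastic-sphere momentum exchange.
# Masses and velocities below are hypothetical illustration values.
def elastic_collision_1d(m1, v1, m2, v2):
    """Post-collision velocities for a head-on 1-D elastic collision."""
    v1p = ((m1 - m2) * v1 + 2.0 * m2 * v2) / (m1 + m2)
    v2p = ((m2 - m1) * v2 + 2.0 * m1 * v1) / (m1 + m2)
    return v1p, v2p

# A light fast particle hits a heavy slow one: momentum flows from the
# faster to the slower particle (here the light one loses 2.4 units).
v1p, v2p = elastic_collision_1d(1.0, 2.0, 4.0, 0.5)
print(v1p, v2p)  # the fast particle slows down, the slow one speeds up
```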
Abstract:
FRAME3D, a program for the nonlinear seismic analysis of steel structures, has previously been used to study the collapse mechanisms of steel buildings up to 20 stories tall. The present thesis is motivated by the need to conduct similar analyses for much taller structures. It improves FRAME3D in two primary ways.
First, FRAME3D is revised to address specific nonlinear situations involving large displacement/rotation increments, the backup-subdivide algorithm, element failure, and extremely narrow joint hysteresis. The revisions result in superior convergence capabilities when modeling earthquake-induced collapse. The material model of a steel fiber is also modified to allow for post-rupture compressive strength.
Second, a parallel FRAME3D (PFRAME3D) is developed. The serial code is optimized and then parallelized. A distributed-memory divide-and-conquer approach is used for both the global direct solver and element-state updates. The result is an implicit finite-element hybrid-parallel program that takes advantage of the narrow-band nature of very tall buildings and uses nearest-neighbor-only communication patterns.
Using three structures of varied size, PFRAME3D is shown to compute reproducible results that agree with those of the optimized 1-core version (displacement time-history response root-mean-squared errors are ~10⁻⁵ m) in much less wall time (e.g., a dynamic time-history collapse simulation of a 60-story building is computed in 5.69 hrs with 128 cores, a speedup of 14.7 vs. the optimized 1-core version). The maximum speedups attained increase with building height (as the total number of cores used also increases), and the parallel framework can be expected to be suitable for buildings taller than those presented here.
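A quick back-of-envelope check of the figures quoted above (128 cores, 14.7x speedup, 5.69 hr parallel wall time); the implied serial time and parallel efficiency are derived here for illustration and are not numbers reported in the abstract:

```python
# Derived quantities from the quoted benchmark figures.
cores = 128
speedup = 14.7
parallel_hours = 5.69

serial_hours = speedup * parallel_hours  # implied 1-core wall time, ~83.6 hr
efficiency = speedup / cores             # parallel efficiency, ~11.5%

print(f"implied serial time: {serial_hours:.1f} hr")
print(f"parallel efficiency: {efficiency:.1%}")
```

The modest efficiency is consistent with the narrow-band, nearest-neighbor-only communication pattern described above, where the usable core count is limited by the building's bandwidth rather than by the total problem size.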
PFRAME3D is used to analyze a hypothetical 60-story steel moment-frame tube building (fundamental period of 6.16 sec) designed according to the 1994 Uniform Building Code. Dynamic pushover and time-history analyses are conducted. Multi-story shear-band collapse mechanisms are observed around mid-height of the building. The use of closely-spaced columns and deep beams is found to contribute to the building's “somewhat brittle” behavior (ductility ratio ~2.0). Overall building strength is observed to be sensitive to whether a model is fracture-capable.