13 results for safer speeds
in CaltechTHESIS
Abstract:
Some aspects of wave propagation in thin elastic shells are considered. The governing equations are derived by a method which makes their relationship to the exact equations of linear elasticity quite clear. Finite wave propagation speeds are ensured by the inclusion of the appropriate physical effects.
The problem of a constant pressure front moving with constant velocity along a semi-infinite circular cylindrical shell is studied. The behavior of the solution immediately under the leading wave is found, as well as the short time solution behind the characteristic wavefronts. The main long time disturbance is found to travel with the velocity of very long longitudinal waves in a bar and an expression for this part of the solution is given.
When a constant moment is applied to the lip of an open spherical shell, there is an interesting effect due to the focusing of the waves. This phenomenon is studied and an expression is derived for the wavefront behavior for the first passage of the leading wave and its first reflection.
For the two problems mentioned, the method used involves reducing the governing partial differential equations to ordinary differential equations by means of a Laplace transform in time. The information sought is then extracted by doing the appropriate asymptotic expansion with the Laplace variable as parameter.
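Schematically (this is the generic technique, not the thesis's particular shell equations, and c and ν below are placeholders for the relevant characteristic speed and singularity strength), the transform removes time, and wavefront behavior follows from a large-s expansion inverted term by term:

```latex
\bar{u}(x,s) = \int_0^\infty u(x,t)\,e^{-st}\,dt,
\qquad
\bar{u}(x,s) \sim e^{-sx/c}\sum_{n\ge 0} a_n(x)\,s^{-n-\nu}
\quad (s\to\infty),
\qquad
\mathcal{L}^{-1}\!\bigl[s^{-n-\nu}e^{-sx/c}\bigr]
   = \frac{(t-x/c)^{n+\nu-1}}{\Gamma(n+\nu)}\,H(t-x/c).
```

Each inverted term vanishes ahead of the characteristic arrival time t = x/c, which is how the finite wavefront speeds appear explicitly in the short-time solution behind the front.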
Abstract:
Heparin has been used as an anticoagulant drug for more than 70 years. The global distribution of contaminated heparin in 2007, which resulted in adverse clinical effects and over 100 deaths, emphasizes the necessity for safer alternatives to animal-sourced heparin. The structural complexity and heterogeneity of animal-sourced heparin not only impede safe access to these biologically active molecules, but also hinder investigations into the significance of structural constituents at a molecular level. Efficient methods for preparing new synthetic heparins with targeted biological activity are necessary not only to ensure clinical safety, but also to optimize derivative design to minimize potential side effects. Low molecular weight heparins have become a reliable alternative to heparin, due to their predictable dosages, long half-lives, and reduced side effects. However, heparin oligosaccharide synthesis is a challenging endeavor due to the necessity for complex protecting group manipulation and stereoselective glycosidic linkage chemistry, which often result in lengthy synthetic routes and low yields. Recently, chemoenzymatic syntheses have produced targeted ultralow molecular weight heparins with high efficiency, but continue to be restricted by the substrate specificities of enzymes.
To address the need for access to homogeneous, complex glycosaminoglycan structures, we have synthesized novel heparan sulfate glycopolymers with well-defined carbohydrate structures and tunable chain length through ring-opening metathesis polymerization chemistry. These polymers recapitulate the key features of anticoagulant heparan sulfate by displaying the sulfation pattern responsible for heparin’s anticoagulant activity. The use of polymerization chemistry greatly simplifies the synthesis of complex glycosaminoglycan structures, providing a facile method to generate homogeneous macromolecules with tunable biological and chemical properties. Through the use of in vitro chromogenic substrate assays and ex vivo clotting assays, we found that the HS glycopolymers exhibited anticoagulant activity in a sulfation pattern and length-dependent manner. Compared to heparin standards, our short polymers did not display any activity. However, our longer polymers were able to incorporate in vitro and ex vivo characteristics of both low-molecular-weight heparin derivatives and heparin, displaying hybrid anticoagulant properties. These studies emphasize the significance of sulfation pattern specificity in specific carbohydrate-protein interactions, and demonstrate the effectiveness of multivalent molecules in recapitulating the activity of natural polysaccharides.
Abstract:
With the size of transistors approaching the sub-nanometer scale and Si-based photonics pinned at the micrometer scale due to the diffraction limit of light, we are unable to easily integrate the high transfer speeds of this comparably bulky technology with the increasingly smaller architecture of state-of-the-art processors. However, we find that we can bridge the gap between these two technologies by directly coupling electrons to photons through the use of dispersive metals in optics. Doing so allows us to access the surface electromagnetic wave excitations that arise at a metal/dielectric interface, a feature which both confines and enhances light in subwavelength dimensions - two promising characteristics for the development of integrated chip technology. This platform is known as plasmonics, and it allows us to design a broad range of complex metal/dielectric systems, all having different nanophotonic responses, but all originating from our ability to engineer the system surface plasmon resonances and interactions. In this thesis, we demonstrate how plasmonics can be used to develop coupled metal-dielectric systems to function as tunable plasmonic hole array color filters for CMOS image sensing, visible metamaterials composed of coupled negative-index plasmonic coaxial waveguides, and programmable plasmonic waveguide network systems to serve as color routers and logic devices at telecommunication wavelengths.
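The surface excitations in question are surface plasmon polaritons; for a single metal/dielectric interface, the textbook dispersion relation is

```latex
k_{\mathrm{spp}} = \frac{\omega}{c}
\sqrt{\frac{\varepsilon_m\,\varepsilon_d}{\varepsilon_m+\varepsilon_d}},
```

with a bound mode when Re(ε_m) < −ε_d. Since k_spp then exceeds the dielectric's light line ω√ε_d/c, the mode is confined to the interface below the free-space diffraction limit, which is the subwavelength confinement exploited throughout the thesis.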
Abstract:
The concept of seismogenic asperities and aseismic barriers has become a useful paradigm within which to understand the seismogenic behavior of major faults. Since asperities and barriers can be thought of as defining the potential rupture area of large megathrust earthquakes, it is important to identify their respective spatial extents, constrain their temporal longevity, and develop a physical understanding of their behavior. Space geodesy is making critical contributions to the identification of slip asperities and barriers, but progress in many geographical regions depends on improving the accuracy and precision of the basic measurements. This thesis begins with technical developments aimed at improving satellite radar interferometric measurements of ground deformation, in which we introduce an empirical correction algorithm for unwanted interferometric path delays caused by spatially and temporally variable radar wave propagation speeds in the atmosphere. In chapter 2, I combine geodetic datasets with complementary spatio-temporal resolutions to improve our understanding of the spatial distribution of crustal deformation sources and their associated temporal evolution, here using observations from Long Valley Caldera (California) as a test bed. In the third chapter I apply the tools developed in the first two chapters to analyze postseismic deformation associated with the 2010 Mw=8.8 Maule (Chile) earthquake. The result delimits patches where afterslip occurs, explores their relationship to coseismic rupture, quantifies frictional properties associated with inferred patches of afterslip, and discusses the relationship of asperities and barriers to long-term topography.
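As a toy illustration of such an empirical path-delay correction (not the algorithm developed in the thesis), one common first-order approach treats the stratified tropospheric delay as linear in topographic height and removes the fitted trend; all variable names and numbers below are hypothetical:

```python
import numpy as np

def remove_stratified_delay(phase, height):
    """Fit phase = a*height + b and subtract the fitted trend."""
    a, b = np.polyfit(height, phase, 1)
    return phase - (a * height + b)

# Synthetic interferogram profile: a deformation signal plus a
# height-correlated tropospheric delay (all values made up).
x = np.arange(100)
height = 10.0 * x                                 # elevation, metres
deformation = 0.5 * np.sin(2 * np.pi * x / 10.0)  # true signal, radians
tropo = 0.002 * height                            # stratified delay, radians
observed = deformation + tropo

corrected = remove_stratified_delay(observed, height)
```

Because the sinusoidal deformation here is nearly uncorrelated with height, the fit captures the tropospheric ramp and the corrected phase is close to the true signal; real corrections must also handle turbulent delay components, which this sketch ignores.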
The final chapter investigates interseismic deformation of the eastern Makran subduction zone using satellite radar interferometry alone, and demonstrates that with state-of-the-art techniques it is possible to quantify tectonic signals of small amplitude and long wavelength. Portions of the eastern Makran for which we estimate low fault coupling correspond to areas where bathymetric features on the downgoing plate are presently subducting, whereas the region of the 1945 M=8.1 earthquake appears to be more highly coupled.
Abstract:
Flies are particularly adept at balancing the competing demands of delay tolerance, performance, and robustness during flight, which invites thoughtful examination of their multimodal feedback architecture. This dissertation examines stabilization requirements for inner-loop feedback strategies in the flapping flight of Drosophila, the fruit fly, against the backdrop of sensorimotor transformations present in the animal. Flies have evolved multiple specializations to reduce sensorimotor latency, but sensory delay during flight is still significant on the timescale of body dynamics. I explored the effect of sensor delay on flight stability and performance for yaw turns using a dynamically-scaled robot equipped with a real-time feedback system that performed active turns in response to measured yaw torque. The results show a fundamental tradeoff between sensor delay and permissible feedback gain, and suggest that fast mechanosensory feedback provides a source of active damping that complements that contributed by passive effects. Presented in the context of these findings, a control architecture whereby a haltere-mediated inner-loop proportional controller provides damping for slower visually-mediated feedback is consistent with tethered-flight measurements, free-flight observations, and engineering design principles. Additionally, I investigated how flies adjust stroke features to regulate and stabilize level forward flight. The results suggest that few changes to hovering kinematics are actually required to meet steady-state lift and thrust requirements at different flight speeds, and that the primary driver of equilibrium velocity is the aerodynamic pitch moment. This finding is consistent with prior hypotheses and observations regarding the relationship between body pitch and flight speed in fruit flies. The results also show that the dynamics may be stabilized with additional pitch damping, but the magnitude of required damping increases with flight speed.
I posit that differences in stroke deviation between the upstroke and downstroke might play a critical role in this stabilization. Fast mechanosensory feedback of the pitch rate could enable active damping, which would inherently exhibit gain scheduling with flight speed if pitch torque is regulated by adjusting stroke deviation. Such a control scheme would provide an elegant solution for flight stabilization across a wide range of flight speeds.
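The delay-gain tradeoff can be sketched with a toy delayed-feedback simulation (an illustration of the general principle, not the robot or fly model from the thesis; all parameter values are hypothetical):

```python
# Yaw dynamics with delayed proportional damping:
#   inertia * d(omega)/dt = -gain * omega(t - delay).
# Stability requires roughly gain * delay < pi/2, so a gain that damps
# well with fast feedback becomes destabilizing once the sensor delay grows.

def final_yaw_rate(gain, delay, dt=0.001, t_end=2.0, inertia=1.0):
    lag = int(round(delay / dt))
    omega = [1.0] * (lag + 1)             # initial yaw-rate history
    for _ in range(int(t_end / dt)):
        torque = -gain * omega[-1 - lag]  # feedback acts on a stale measurement
        omega.append(omega[-1] + dt * torque / inertia)
    return omega[-1]

fast = final_yaw_rate(gain=10.0, delay=0.01)   # gain*delay = 0.1: damped
slow = final_yaw_rate(gain=200.0, delay=0.02)  # gain*delay = 4.0: unstable
```

The first case decays to zero while the second oscillates and diverges, mirroring the observation that permissible feedback gain shrinks as sensor delay grows.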
Abstract:
Hypervelocity impact of meteoroids and orbital debris poses a serious and growing threat to spacecraft. To study hypervelocity impact phenomena, a comprehensive ensemble of real-time concurrently operated diagnostics has been developed and implemented in the Small Particle Hypervelocity Impact Range (SPHIR) facility. This suite of simultaneously operated instrumentation provides multiple complementary measurements that facilitate the characterization of many impact phenomena in a single experiment. The investigation of hypervelocity impact phenomena described in this work focuses on normal impacts of 1.8 mm nylon 6/6 cylinder projectiles and variable thickness aluminum targets. The SPHIR facility's two-stage light-gas gun is capable of routinely launching 5.5 mg nylon impactors to speeds of 5 to 7 km/s. Refinement of legacy SPHIR operation procedures and the investigation of first-stage pressure have improved the velocity performance of the facility, resulting in an increase in average impact velocity of at least 0.57 km/s. Results for the perforation area indicate that the considered range of target thicknesses spans multiple regimes describing the non-monotonic scaling of target perforation with decreasing target thickness. The laser side-lighting (LSL) system has been developed to provide ultra-high-speed shadowgraph images of the impact event. This novel optical technique is demonstrated to characterize the propagation velocity and two-dimensional optical density of impact-generated debris clouds. Additionally, a debris capture system is located behind the target during every experiment to provide complementary information regarding the trajectory distribution and penetration depth of individual debris particles. The utilization of a coherent, collimated illumination source in the LSL system facilitates the simultaneous measurement of impact phenomena with near-IR and UV-vis spectrograph systems.
Comparison of LSL images to concurrent IR results indicates two distinctly different phenomena. A high-speed, pressure-dependent IR-emitting cloud is observed in experiments to expand at velocities much higher than the debris and ejecta phenomena observed using the LSL system. In double-plate target configurations, this phenomenon is observed to interact with the rear wall several microseconds before the subsequent arrival of the debris cloud. Additionally, dimensional analysis presented by Whitham for blast waves is shown to describe the pressure-dependent radial expansion of the observed IR-emitting phenomena. Although this work focuses on a single hypervelocity impact configuration, the diagnostic capabilities and techniques described can be used with a wide variety of impactors, materials, and geometries to investigate any number of engineering and scientific problems.
Abstract:
The initial objective of Part I was to determine the nature of upper mantle discontinuities, the average velocities through the mantle, and differences between mantle structure under continents and oceans by the use of P'dP', the seismic core phase P'P' (PKPPKP) that reflects at depth d in the mantle. In order to accomplish this, it was found necessary to also investigate core phases themselves and their implications for core structure. P'dP' at both single stations and at the LASA array in Montana indicates that the following zones are candidates for discontinuities with varying degrees of confidence: 800-950 km, weak; 630-670 km, strongest; 500-600 km, strong but interpretation in doubt; 350-415 km, fair; 280-300 km, strong, varying in depth; 100-200 km, strong, varying in depth, may be the bottom of the low-velocity zone. It is estimated that a single station cannot easily discriminate between asymmetric P'P' and P'dP' for lead times of about 30 sec from the main P'P' phase, but the LASA array reduces this uncertainty range to less than 10 sec. The problems of scatter of P'P' main-phase times, mainly due to asymmetric P'P', incorrect identification of the branch, and lack of the proper velocity structure at the velocity point, are avoided and the analysis shows that one-way travel of P waves through oceanic mantle is delayed by 0.65 to 0.95 sec relative to United States mid-continental mantle.
A new P-wave velocity core model is constructed from observed times, dt/dΔ's, and relative amplitudes of P'; the observed times of SKS, SKKS, and PKiKP; and a new mantle-velocity determination by Jordan and Anderson. The new core model is smooth except for a discontinuity at the inner-core boundary determined to be at a radius of 1215 km. Short-period amplitude data do not require the inner core Q to be significantly lower than that of the outer core. Several lines of evidence show that most, if not all, of the arrivals preceding the DF branch of P' at distances shorter than 143° are due to scattering as proposed by Haddon and not due to spherically symmetric discontinuities just above the inner core as previously believed. Calculation of the travel-time distribution of scattered phases and comparison with published data show that the strongest scattering takes place at or near the core-mantle boundary close to the seismic station.
In Part II, the largest events in the San Fernando earthquake series, initiated by the main shock at 14 00 41.8 GMT on February 9, 1971, were chosen for analysis from the first three months of activity, 87 events in all. The initial rupture location coincides with the lower, northernmost edge of the main north-dipping thrust fault and the aftershock distribution. The best focal mechanism fit to the main shock P-wave first motions constrains the fault plane parameters to: strike, N 67° (± 6°) W; dip, 52° (± 3°) NE; rake, 72° (67°-95°) left lateral. Focal mechanisms of the aftershocks clearly outline a downstep of the western edge of the main thrust fault surface along a northeast-trending flexure. Faulting on this downstep is left-lateral strike-slip and dominates the strain release of the aftershock series, which indicates that the downstep limited the main event rupture on the west. The main thrust fault surface dips at about 35° to the northeast at shallow depths and probably steepens to 50° below a depth of 8 km. This steep dip at depth is a characteristic of other thrust faults in the Transverse Ranges and indicates the presence at depth of laterally-varying vertical forces that are probably due to buckling or overriding that causes some upward redirection of a dominant north-south horizontal compression. Two sets of events exhibit normal dip-slip motion with shallow hypocenters and correlate with areas of ground subsidence deduced from gravity data. Several lines of evidence indicate that a horizontal compressional stress in a north or north-northwest direction was added to the stresses in the aftershock area 12 days after the main shock. After this change, events were contained in bursts along the downstep and sequencing within the bursts provides evidence for an earthquake-triggering phenomenon that propagates with speeds of 5 to 15 km/day. 
Seismicity before the San Fernando series and the mapped structure of the area suggest that the downstep of the main fault surface is not a localized discontinuity but is part of a zone of weakness extending from Point Dume, near Malibu, to Palmdale on the San Andreas fault. This zone is interpreted as a decoupling boundary between crustal blocks that permits them to deform separately in the prevalent crustal-shortening mode of the Transverse Ranges region.
Abstract:
Thrust fault earthquakes are investigated in the laboratory by generating dynamic shear ruptures along pre-existing frictional faults in rectangular plates. A considerable body of evidence suggests that dip-slip earthquakes exhibit enhanced ground motions in the acute hanging wall wedge as an outcome of broken symmetry between hanging and foot wall plates with respect to the earth surface. To understand the physical behavior of thrust fault earthquakes, particularly ground motions near the earth surface, ruptures are nucleated in analog laboratory experiments and guided up-dip towards the simulated earth surface. The transient slip event and emitted radiation mimic a natural thrust earthquake. High-speed photography and laser velocimeters capture the rupture evolution, outputting a full-field view of photo-elastic fringe contours proportional to maximum shearing stresses as well as continuous ground motion velocity records at discrete points on the specimen. Earth surface-normal measurements validate selective enhancement of hanging wall ground motions for both sub-Rayleigh and super-shear rupture speeds. The earth surface breaks upon rupture tip arrival to the fault trace, generating prominent Rayleigh surface waves. A rupture wave is sensed in the hanging wall but is, however, absent from the foot wall plate: a direct consequence of proximity from fault to seismometer. Signatures in earth surface-normal records attenuate with distance from the fault trace. Super-shear earthquakes feature greater amplitudes of ground shaking profiles, as expected from the increased tectonic pressures required to induce super-shear transition. Paired stations measure fault parallel and fault normal ground motions at various depths, which yield slip and opening rates through direct subtraction of like components. 
Peak fault slip and opening rates associated with the rupture tip increase with proximity to the fault trace, a result of selective ground motion amplification in the hanging wall. Fault opening rates indicate that the hanging and foot walls detach near the earth surface, a phenomenon promoted by a decrease in magnitude of far-field tectonic loads. Subsequent shutting of the fault sends an opening pulse back down-dip. In the case of a sub-Rayleigh earthquake, feedback from the reflected S wave re-ruptures the locked fault at super-shear speeds, providing another mechanism of super-shear transition.
Abstract:
The Madden-Julian Oscillation (MJO) is a pattern of intense rainfall and associated planetary-scale circulations in the tropical atmosphere, with a recurrence interval of 30-90 days. Although the MJO was first discovered 40 years ago, it is still a challenge to simulate the MJO in general circulation models (GCMs), and even with simple models it is difficult to agree on the basic mechanisms. This deficiency is mainly due to our poor understanding of moist convection—deep cumulus clouds and thunderstorms, which occur at scales that are smaller than the resolution elements of the GCMs. Moist convection is the most important mechanism for transporting energy from the ocean to the atmosphere. Success in simulating the MJO will improve our understanding of moist convection and thereby improve weather and climate forecasting.
We address this fundamental subject by analyzing observational datasets, constructing a hierarchy of numerical models, and developing theories. Parameters of the models are taken from observation, and the simulated MJO fits the data without further adjustments. The major findings include: 1) the MJO may be an ensemble of convection events linked together by small-scale high-frequency inertia-gravity waves; 2) the eastward propagation of the MJO is determined by the difference between the eastward and westward phase speeds of the waves; 3) the planetary scale of the MJO is the length over which temperature anomalies can be effectively smoothed by gravity waves; 4) the strength of the MJO increases with the typical strength of convection, which increases in a warming climate; 5) the horizontal scale of the MJO increases with the spatial frequency of convection; and 6) triggered convection, where potential energy accumulates until a threshold is reached, is important in simulating the MJO. Our findings challenge previous paradigms, which consider the MJO as a large-scale mode, and point to ways for improving the climate models.
Abstract:
Dynamic rupture simulations are unique in their contributions to the study of earthquake physics. The current rapid development of dynamic rupture simulations poses several new questions: Do the simulations reflect the real world? Do the simulations have predictive power? Which one should we believe when the simulations disagree? This thesis illustrates how integration with observations can help address these questions and reduce the effects of non-uniqueness of both dynamic rupture simulations and kinematic inversion problems. Dynamic rupture simulations with observational constraints can effectively identify non-physical features inferred from observations. Moreover, the integrative technique can also provide more physical insights into the mechanisms of earthquakes. This thesis demonstrates two examples of such integration: dynamic rupture simulations of the Mw 9.0 2011 Tohoku-Oki earthquake and of earthquake ruptures in damaged fault zones:
(1) We develop simulations of the Tohoku-Oki earthquake based on a variety of observations and minimal assumptions about model parameters. The simulations provide realistic estimates of the stress drop and fracture energy of the region and explain the physical mechanisms of high-frequency radiation in the deep region. We also find that the overriding subduction wedge contributes significantly to the up-dip rupture propagation and large final slip in the shallow region. Such findings are also applicable to other megathrust earthquakes.
(2) Damaged fault zones are usually found around natural faults, but their effects on earthquake ruptures remain largely unknown. We simulate earthquake ruptures in damaged fault zones with material properties constrained by seismic and geological observations. We show that reflected waves in fault zones are effective at generating pulse-like ruptures and head waves tend to accelerate and decelerate rupture speeds. These mechanisms are robust in natural fault zones with large attenuation and off-fault plasticity. Moreover, earthquakes in damaged fault zones can propagate at super-Rayleigh speeds that are unstable in homogeneous media. Supershear transitions in fault zones do not require large fault stresses. Finally, we present observations in the Big Bear region, where variability of rupture speeds of small earthquakes correlates with the laterally variable materials in a damaged fault zone.
Abstract:
n-heptane/air premixed turbulent flames in the high-Karlovitz portion of the thin reaction zone regime are characterized and modeled in this thesis using Direct Numerical Simulations (DNS) with detailed chemistry. In order to perform these simulations, a time-integration scheme that can efficiently handle the stiffness of the equations solved is developed first. A first simulation with unity Lewis number is considered in order to assess the effect of turbulence on the flame in the absence of differential diffusion. A second simulation with non-unity Lewis numbers is considered to study how turbulence affects differential diffusion. In the absence of differential diffusion, minimal departure from the 1D unstretched flame structure (species vs. temperature profiles) is observed. In the non-unity Lewis number case, the flame structure lies between that of 1D unstretched flames with "laminar" non-unity Lewis numbers and unity Lewis number. This is attributed to effective Lewis numbers resulting from intense turbulent mixing and a first model is proposed. The reaction zone is shown to be thin for both flames, yet large chemical source term fluctuations are observed. The fuel consumption rate is found to be only weakly correlated with stretch, although local extinctions in the non-unity Lewis number case are well correlated with high curvature. These results explain the apparent turbulent flame speeds. Other variables that better correlate with this fuel burning rate are identified through a coordinate transformation. It is shown that the unity Lewis number turbulent flames can be accurately described by a set of 1D (in progress variable space) flamelet equations parameterized by the dissipation rate of the progress variable. In the non-unity Lewis number flames, the flamelet equations suggest a dependence on a second parameter, the diffusion of the progress variable. 
A new tabulation approach is proposed for the simulation of such flames with these dimensionally-reduced manifolds.
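Written schematically for the unity Lewis number case (and omitting the additional terms of the thesis's full derivation), such progress-variable flamelet equations take the form

```latex
\rho\,\frac{\chi_c}{2}\,\frac{\partial^2 \psi}{\partial c^2}
+ \dot{\omega}_\psi = 0,
\qquad
\chi_c \equiv 2\,D_c\,\lvert \nabla c \rvert^2,
```

where ψ stands for a species mass fraction or temperature, c is the progress variable, and the dissipation rate χ_c is the parameter of the 1D manifold; the non-unity Lewis number case adds the diffusion of the progress variable as a second parameter, as noted above.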
Abstract:
Mean velocity profiles were measured in the 5” x 60” wind channel of the turbulence laboratory at GALCIT using a hot-wire anemometer. The repeatability of results was established, and the accuracy of the instrumentation estimated. Scatter of the experimental results is little, if anything, beyond this limit, although some effects might be expected to arise from variations in atmospheric humidity, no account of this factor having been taken in the present work. Slight unsteadiness in flow conditions will also be responsible for some scatter.
Irregularities in hot-wire readings in close proximity to a solid boundary at low speeds were observed, as others have already found.
Kármán’s logarithmic law was confirmed to hold reasonably well over the main part of a fully developed turbulent flow, the equation u/uτ = 6.0 + 6.25 log10(y uτ/ν) being obtained; as has previously been the case, the experimental points do not quite form one straight line in the region where viscosity effects are small. The values of the constants in this law giving the best overall agreement were determined and compared with those obtained by others.
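The fitted law can be evaluated directly; the friction velocity and kinematic viscosity below are illustrative values, not data from the thesis:

```python
import math

def u_over_utau(y, u_tau=0.1, nu=1.5e-5):
    """Log law with the fitted constants: u/u_tau = 6.0 + 6.25*log10(y*u_tau/nu)."""
    return 6.0 + 6.25 * math.log10(y * u_tau / nu)

# At y = 0.015 m these values give a wall coordinate y+ = y*u_tau/nu = 100,
# so u/u_tau = 6.0 + 6.25*2 = 18.5.
ratio = u_over_utau(y=0.015)
```

The law applies only in the logarithmic region; very close to the wall (small y+) viscosity effects dominate and the experimental points depart from the straight line, as noted above.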
The range of Reynolds numbers used (based on half-width of channel) was from 20,000 to 60,000.
Abstract:
We are at the cusp of a historic transformation of both communication systems and electricity systems. This creates challenges as well as opportunities for the study of networked systems. Problems in these systems typically involve a huge number of endpoints that require intelligent coordination in a distributed manner. In this thesis, we develop models, theories, and scalable distributed optimization and control algorithms to overcome these challenges.
This thesis focuses on two specific areas: multi-path TCP (Transmission Control Protocol) and electricity distribution system operation and control. Multi-path TCP (MP-TCP) is a TCP extension that allows a single data stream to be split across multiple paths. MP-TCP has the potential to greatly improve reliability as well as efficiency of communication devices. We propose a fluid model for a large class of MP-TCP algorithms and identify design criteria that guarantee the existence, uniqueness, and stability of system equilibrium. We clarify how algorithm parameters impact TCP-friendliness, responsiveness, and window oscillation and demonstrate an inevitable tradeoff among these properties. We discuss the implications of these properties on the behavior of existing algorithms and motivate a new algorithm, Balia (balanced linked adaptation), which generalizes existing algorithms and strikes a good balance among TCP-friendliness, responsiveness, and window oscillation. We have implemented Balia in the Linux kernel and use our prototype to compare it with existing MP-TCP algorithms.
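A fluid model of this kind can be sketched for two subflows with a coupled window increase; the dynamics below are a generic illustrative choice, not the Balia equations:

```python
# Per-path window dynamics: increase 1/(w1+w2) per ACK (coupled across
# paths), halve w_r on each loss (loss rate p_r * x_r), with sending
# rate x_r = w_r / rtt_r:
#   dw_r/dt = x_r * (1/(w1 + w2) - p_r * w_r / 2)

def equilibrium_windows(p, rtt, dt=0.001, t_end=30.0):
    w = [1.0] * len(p)                 # initial congestion windows
    for _ in range(int(t_end / dt)):
        total = sum(w)
        x = [wr / r for wr, r in zip(w, rtt)]
        w = [wr + dt * xr * (1.0 / total - pr * wr / 2.0)
             for wr, xr, pr in zip(w, x, p)]
    return w

# Two paths with equal RTT but different loss rates (hypothetical values):
w1, w2 = equilibrium_windows(p=[0.01, 0.02], rtt=[0.1, 0.1])
```

At equilibrium 1/(w1+w2) = p_r·w_r/2 on each used path, so more of the window sits on the less lossy path while the coupled increase keeps the aggregate TCP-friendly; design questions such as responsiveness and window oscillation concern how such equilibria are approached.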
Our second focus is on designing computationally efficient algorithms for electricity distribution system operation and control. First, we develop efficient algorithms for feeder reconfiguration in distribution networks. The feeder reconfiguration problem chooses the on/off status of the switches in a distribution network in order to minimize a certain cost such as power loss. It is a mixed integer nonlinear program and hence hard to solve. We propose a heuristic algorithm based on the recently developed convex relaxation of the optimal power flow problem. The algorithm is efficient and successfully computes an optimal configuration on all networks we have tested. Moreover, we prove that the algorithm solves the feeder reconfiguration problem optimally under certain conditions. We also propose an even faster algorithm that incurs an optimality loss of less than 3% on the test networks.
Second, we develop efficient distributed algorithms that solve the optimal power flow (OPF) problem on distribution networks. The OPF problem determines a network operating point that minimizes a certain objective such as generation cost or power loss. Traditionally OPF is solved in a centralized manner. With increasing penetration of volatile renewable energy resources in distribution systems, we need faster and distributed solutions for real-time feedback control. This is difficult because power flow equations are nonlinear and Kirchhoff's laws are global. We propose solutions for both balanced and unbalanced radial distribution networks. They exploit recent results that suggest solving for a globally optimal solution of OPF over a radial network through a second-order cone program (SOCP) or semidefinite program (SDP) relaxation. Our distributed algorithms are based on the alternating direction method of multipliers (ADMM), but unlike standard ADMM-based distributed OPF algorithms that require solving optimization subproblems using iterative methods, the proposed solutions exploit problem structure to greatly reduce computation time. Specifically, for balanced networks, our decomposition allows us to derive closed-form solutions for these subproblems, speeding up convergence by 1000x in simulations. For unbalanced networks, the subproblems reduce to either closed-form solutions or eigenvalue problems whose size remains constant as the network scales up, reducing computation time by 100x compared with iterative methods.
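The benefit of closed-form subproblem updates can be seen in a minimal consensus-ADMM example (an illustrative quadratic toy problem, not the OPF decomposition from the thesis):

```python
# Minimize sum_i 0.5*(x_i - a_i)^2 subject to x_i = z for all i.
# Each ADMM x-update has a one-line closed form, so no inner iterative
# solver is needed; this is the structural advantage exploited at much
# larger scale for the OPF subproblems.

def consensus_admm(a, rho=1.0, iters=300):
    n = len(a)
    z = 0.0
    u = [0.0] * n                       # scaled dual variables
    for _ in range(iters):
        # x-update: argmin_x 0.5*(x - a_i)^2 + (rho/2)*(x - z + u_i)^2
        x = [(ai + rho * (z - ui)) / (1.0 + rho) for ai, ui in zip(a, u)]
        z = sum(xi + ui for xi, ui in zip(x, u)) / n   # z-update: average
        u = [ui + xi - z for ui, xi in zip(u, x)]      # dual update
    return z

z_star = consensus_admm([1.0, 2.0, 6.0])   # optimum is the mean of a
```

Each agent updates its own x_i in parallel using only z and its local dual variable, which is the pattern that makes ADMM attractive for distributed control of networks with many endpoints.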