12 results for Range of Ankle Motion
in CaltechTHESIS
Abstract:
Current earthquake early warning systems usually make magnitude and location predictions and send out a warning to the users based on those predictions. We describe an algorithm that assesses the validity of the predictions in real-time. Our algorithm monitors the envelopes of horizontal and vertical acceleration, velocity, and displacement. We compare the observed envelopes with the ones predicted by Cua & Heaton's envelope ground motion prediction equations (Cua 2005). We define a "test function" as the logarithm of the ratio between observed and predicted envelopes at every second in real-time. Once the envelopes deviate beyond an acceptable threshold, we declare a misfit. Kurtosis and skewness of a time evolving test function are used to rapidly identify a misfit. Real-time kurtosis and skewness calculations are also inputs to both probabilistic (Logistic Regression and Bayesian Logistic Regression) and nonprobabilistic (Least Squares and Linear Discriminant Analysis) models that ultimately decide if there is an unacceptable level of misfit. This algorithm is designed to work at a wide range of amplitude scales. When tested with synthetic and actual seismic signals from past events, it works for both small and large events.
Abstract:
Sufficient conditions are derived for the validity of approximate periodic solutions of a class of second order ordinary nonlinear differential equations. An approximate solution is defined to be valid if an exact solution exists in a neighborhood of the approximation.
Two classes of validity criteria are developed. Existence is obtained using the contraction mapping principle in one case, and the Schauder-Leray fixed point theorem in the other. Both classes of validity criteria make use of symmetry properties of periodic functions, and both classes yield an upper bound on a norm of the difference between the approximate and exact solution. This bound is used in a procedure which establishes sufficient stability conditions for the approximate solution.
Application to a system with piecewise linear restoring force (bilinear system) reveals that the approximate solution obtained by the method of averaging is valid away from regions where the response exhibits vertical tangents. A narrow instability region is obtained near one-half the natural frequency of the equivalent linear system. Sufficient conditions for the validity of resonant solutions are also derived, and two-term harmonic balance approximate solutions which exhibit ultraharmonic and subharmonic resonances are studied.
Abstract:
Glaciers are often assumed to deform only at slow (i.e., glacial) rates. However, with the advent of high rate geodetic observations of ice motion, many of the intricacies of glacial deformation on hourly and daily timescales have been observed and quantified. This thesis explores two such short timescale processes: the tidal perturbation of ice stream motion and the catastrophic drainage of supraglacial meltwater lakes. Our investigation into the transmission length-scale of a tidal load represents the first study to explore the daily tidal influence on ice stream motion using three-dimensional models. Our results demonstrate both that the implicit assumptions made in the standard two-dimensional flow-line models are inherently incorrect for many ice streams, and that the anomalously large spatial extent of the tidal influence seen on the motion of some glaciers cannot be explained, as previously thought, through the elastic or viscoelastic transmission of tidal loads through the bulk of the ice stream. We then discuss how the phase delay between a tidal forcing and the ice stream’s displacement response can be used to constrain in situ viscoelastic properties of glacial ice. Lastly, for the problem of supraglacial lake drainage, we present a methodology for implementing linear viscoelasticity into an existing model for lake drainage. Our work finds that viscoelasticity is a second-order effect when trying to model the deformation of ice in response to a meltwater lake draining to a glacier’s bed. The research in this thesis demonstrates that the first-order understanding of the short-timescale behavior of naturally occurring ice is incomplete, and works towards improving our fundamental understanding of ice behavior over the range of hours to days.
Abstract:
The motion of a single Brownian particle of arbitrary size through a dilute colloidal dispersion of neutrally buoyant bath spheres of another characteristic size in a Newtonian solvent is examined in two contexts. First, the particle in question, the probe particle, is subject to a constant applied external force drawing it through the suspension as a simple model for active and nonlinear microrheology. The strength of the applied external force, normalized by the restoring forces of Brownian motion, is the Péclet number, Pe. This dimensionless quantity describes how strongly the probe is upsetting the equilibrium distribution of the bath particles. The mean motion and fluctuations in the probe position are related to interpreted quantities of an effective viscosity of the suspension. These interpreted quantities are calculated to first order in the volume fraction of bath particles and are intimately tied to the spatial distribution, or microstructure, of bath particles relative to the probe. For weak Pe, the disturbance to the equilibrium microstructure is dipolar in nature, with accumulation and depletion regions on the front and rear faces of the probe, respectively. With increasing applied force, the accumulation region compresses to form a thin boundary layer whose thickness scales with the inverse of Pe. The depletion region lengthens to form a trailing wake. The magnitude of the microstructural disturbance is found to grow with increasing bath particle size -- small bath particles in the solvent resemble a continuum with effective microviscosity given by Einstein's viscosity correction for a dilute dispersion of spheres. Large bath particles readily advect toward the minimum approach distance possible between the probe and bath particle, and the probe and bath particle pair rotating as a doublet is the primary mechanism by which the probe particle is able to move past; this is a process that slows the motion of the probe by a factor of the size ratio. 
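For scale, the Péclet number comparing the applied force to Brownian restoring forces is commonly written Pe = F a / (k_B T) in active microrheology; the exact nondimensionalization used in the thesis may differ, and the numerical values below are purely illustrative.

```python
K_B = 1.380649e-23  # Boltzmann constant, J/K

def peclet(force_n, probe_radius_m, temperature_k=298.0):
    """Pe = F a / (k_B T): applied force times probe size over thermal energy.
    Pe << 1 means Brownian motion dominates; Pe >> 1 means the external
    forcing strongly distorts the equilibrium bath microstructure."""
    return force_n * probe_radius_m / (K_B * temperature_k)

# Illustrative values: a 1 micron probe pulled with 1 pN at room temperature
# already sits deep in the high-Pe, boundary-layer regime described above.
pe = peclet(1e-12, 1e-6)
```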
The intrinsic microviscosity is found to force thin at low Péclet number due to decreasing contributions from Brownian motion, and force thicken at high Péclet number due to the increasing influence of the configuration-averaged reduction in the probe's hydrodynamic self-mobility. Nonmonotonicity at finite sizes is evident in the limiting high-Pe intrinsic microviscosity plateau as a function of bath-to-probe particle size ratio. The intrinsic microviscosity is found to grow with the size ratio for very small probes even at large-but-finite Péclet numbers. However, even a small repulsive interparticle potential that excludes lubrication interactions can reduce this intrinsic microviscosity back to an order-one quantity. The results of this active microrheology study are compared to previous theoretical studies of falling-ball and towed-ball rheometry and sedimentation and diffusion in polydisperse suspensions, and the singular limit of full hydrodynamic interactions is noted.
Second, the probe particle in question is no longer subject to a constant applied external force. Rather, the particle is considered to be a catalytically-active motor, consuming the bath reactant particles on its reactive face while passively colliding with reactant particles on its inert face. By creating an asymmetric distribution of reactant about its surface, the motor is able to diffusiophoretically propel itself with some mean velocity. The effects of finite size of the solute are examined on the leading order diffusive microstructure of reactant about the motor. Brownian and interparticle contributions to the motor velocity are computed for several interparticle interaction potential lengths and finite reactant-to-motor particle size ratios, with the dimensionless motor velocity increasing with decreasing motor size. A discussion on Brownian rotation frames the context in which these results could be applicable, and future directions are proposed which properly incorporate reactant advection at high motor velocities.
Abstract:
In this thesis, we develop an efficient collapse prediction model, the PFA (Peak Filtered Acceleration) model, for buildings subjected to different types of ground motions.
For the structural system, the PFA model covers modern steel and reinforced concrete moment-resisting frame buildings (and potentially reinforced concrete shear wall buildings). For ground motions, the PFA model covers ramp-pulse-like ground motions, long-period ground motions, and short-period ground motions.
To predict whether a building will collapse in response to a given ground motion, we first extract long-period components from the ground motion using a Butterworth low-pass filter with a suggested order and cutoff frequency. The order depends on the type of ground motion, and the cutoff frequency depends on the building’s natural frequency and ductility. We then compare the filtered acceleration time history with the capacity of the building. The capacity of the building is a constant for 2-dimensional buildings and a limit domain for 3-dimensional buildings. If the filtered acceleration exceeds the building’s capacity, the building is predicted to collapse. Otherwise, it is expected to survive the ground motion.
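The filter-and-compare step for the 2-dimensional (scalar-capacity) case can be sketched as follows. This is a minimal illustration, not the thesis implementation: the filter order and cutoff are fixed placeholders here, whereas in the thesis they depend on the ground motion type and on the building's natural frequency and ductility.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def peak_filtered_acceleration(acc, fs, cutoff_hz, order=4):
    """Peak Filtered Acceleration (PFA): peak absolute value of the
    long-period content of a ground acceleration record, extracted with
    a zero-phase Butterworth low-pass filter (order/cutoff illustrative)."""
    b, a = butter(order, cutoff_hz, btype="low", fs=fs)
    filtered = filtfilt(b, a, acc)
    return np.max(np.abs(filtered))

def predict_collapse(acc, fs, cutoff_hz, capacity, order=4):
    """2-D case: collapse is predicted when PFA exceeds a scalar capacity."""
    return peak_filtered_acceleration(acc, fs, cutoff_hz, order) > capacity
```

For the 3-dimensional case the scalar comparison would be replaced by a test against a limit domain, as described above.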
The parameters used in the PFA model, which include fundamental period, global ductility, and lateral capacity, can be obtained either from numerical analysis or by interpolation based on the reference building system proposed in this thesis.
The PFA collapse prediction model greatly reduces computational complexity while achieving good accuracy. It is verified by FEM simulations of 13 frame building models and 150 ground motion records.
Based on the developed collapse prediction model, we propose to use PFA (Peak Filtered Acceleration) as a new ground motion intensity measure for collapse prediction. We compare PFA with traditional intensity measures PGA, PGV, PGD, and Sa in collapse prediction and find that PFA has the best performance among all the intensity measures.
We also provide a closed form of the PFA collapse prediction model in terms of a vector intensity measure (PGV, PGD) for practical collapse risk assessment.
Abstract:
The Northridge earthquake of January 17, 1994, highlighted the two previously known problems of premature fracturing of connections and the damaging capabilities of near-source ground motion pulses. Large ground motions had not been experienced in a city with tall steel moment-frame buildings before. Some steel buildings exhibited fracture of welded connections or other types of structural degradation.
A sophisticated three-dimensional nonlinear inelastic program is developed that can accurately model many nonlinear properties commonly ignored or approximated in other programs. The program can assess and predict severely inelastic response of steel buildings due to strong ground motions, including collapse.
Three-dimensional fiber and segment discretization of elements is presented in this work. This element and its two-dimensional counterpart are capable of modeling various geometric and material nonlinearities such as moment amplification, spread of plasticity and connection fracture. In addition to introducing a three-dimensional element discretization, this work presents three-dimensional constraints that limit the number of equations required to solve various three-dimensional problems consisting of intersecting planar frames.
Two buildings damaged in the Northridge earthquake are investigated to verify the ability of the program to match the level of response and the extent and location of damage measured. The program is used to predict the response to larger near-source ground motions using the properties determined from the matched response.
A third building is studied to assess three-dimensional effects on a realistic irregular building in the inelastic range of response considering earthquake directivity. Damage levels are observed to be significantly affected by directivity and torsional response.
Several strong recorded ground motions clearly exceed code-based levels. Properly designed buildings can have drifts exceeding code specified levels due to these ground motions. The strongest ground motions caused collapse if fracture was included in the model. Near-source ground displacement pulses can cause columns to yield prior to weaker-designed beams. Damage in tall buildings correlates better with peak-to-peak displacements than with peak-to-peak accelerations.
Dynamic response of tall buildings shows that higher mode response can cause more damage than first mode response. Leaking of energy between modes in conjunction with damage can cause torsional behavior that is not anticipated.
Various response parameters are used for all three buildings to determine what correlations can be made for inelastic building response. Damage levels can be dramatically different based on the inelastic model used. Damage does not correlate well with several common response parameters.
Realistic modeling of material properties and structural behavior is of great value for understanding the performance of tall buildings due to earthquake excitations.
Abstract:
This thesis examines the collapse risk of tall steel braced frame buildings using rupture-to-rafters simulations of a suite of San Andreas earthquakes. Two key advancements in this work are the development of (i) a rational methodology for assigning scenario earthquake probabilities and (ii) an approach to broadband ground motion simulation free of artificial corrections. The work can be divided into the following sections: earthquake source modeling, earthquake probability calculations, ground motion simulations, building response, and performance analysis.
As a first step, kinematic source inversions of past earthquakes in the magnitude range 6-8 are used to simulate 60 scenario earthquakes on the San Andreas fault. For each scenario earthquake, a 30-year occurrence probability is calculated, and we present a rational method to redistribute the forecast earthquake probabilities from UCERF to the simulated scenario earthquakes. We illustrate the inner workings of the method through an example involving earthquakes on the San Andreas fault in southern California.
Next, three-component broadband ground motion histories are computed at 636 sites in the greater Los Angeles metropolitan area by superposing short-period (0.2 s-2.0 s) empirical Green's function synthetics on long-period (> 2.0 s) seismograms computed from kinematic source models using the spectral element method, producing broadband seismograms.
Using the ground motions at the 636 sites for the 60 scenario earthquakes, 3-D nonlinear analyses of several variants of an 18-story steel braced frame building, designed for three soil types using the 1994 and 1997 Uniform Building Code provisions, are conducted. Model performance is classified into one of five performance levels: Immediate Occupancy, Life Safety, Collapse Prevention, Red-Tagged, and Model Collapse. The results are combined with the 30-year probability of occurrence of the San Andreas scenario earthquakes using the PEER performance based earthquake engineering framework to determine the probability of exceedance of these limit states over the next 30 years.
Abstract:
Quantum mechanics places limits on the minimum energy of a harmonic oscillator via the ever-present "zero-point" fluctuations of the quantum ground state. Through squeezing, however, it is possible to decrease the noise of a single motional quadrature below the zero-point level as long as noise is added to the orthogonal quadrature. While squeezing below the quantum noise level was achieved decades ago with light, quantum squeezing of the motion of a mechanical resonator is a more difficult prospect due to the large thermal occupations of megahertz-frequency mechanical devices even at typical dilution refrigerator temperatures of ~ 10 mK.
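The difficulty can be made concrete with the Bose-Einstein thermal occupation n̄ = 1/(exp(hf/k_BT) − 1); the frequency and temperature below are illustrative round numbers, not values from the experiment.

```python
import math

H = 6.62607015e-34   # Planck constant, J*s
K_B = 1.380649e-23   # Boltzmann constant, J/K

def thermal_occupation(freq_hz, temp_k):
    """Mean thermal phonon number of a harmonic mode at frequency f
    and temperature T (Bose-Einstein distribution)."""
    return 1.0 / math.expm1(H * freq_hz / (K_B * temp_k))

# A 5 MHz mechanical mode at 10 mK still holds tens of thermal phonons,
# far from its quantum ground state -- hence the need for the reservoir
# engineering scheme described below.
n_bar = thermal_occupation(5e6, 10e-3)
```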
Kronwald, Marquardt, and Clerk (2013) propose a method of squeezing a single quadrature of mechanical motion below the level of its zero-point fluctuations, even when the mechanics starts out with a large thermal occupation. The scheme operates under the framework of cavity optomechanics, where an optical or microwave cavity is coupled to the mechanics in order to control and read out the mechanical state. In the proposal, two pump tones are applied to the cavity, each detuned from the cavity resonance by the mechanical frequency. The pump tones establish and couple the mechanics to a squeezed reservoir, producing arbitrarily large, steady-state squeezing of the mechanical motion. In this dissertation, I describe two experiments related to the implementation of this proposal in an electromechanical system. I also expand on the theory presented in Kronwald et al. to include the effects of squeezing in the presence of classical microwave noise, and without assumptions of perfect alignment of the pump frequencies.
In the first experiment, we produce a squeezed thermal state using the method of Kronwald et al. We perform back-action evading measurements of the mechanical squeezed state in order to probe the noise in both quadratures of the mechanics. Using this method, we detect single-quadrature fluctuations at the level of 1.09 +/- 0.06 times the quantum zero-point motion.
In the second experiment, we measure the spectral noise of the microwave cavity in the presence of the squeezing tones and fit a full model to the spectrum in order to deduce a quadrature variance of 0.80 +/- 0.03 times the zero-point level. These measurements provide the first evidence of quantum squeezing of motion in a mechanical resonator.
Abstract:
Few credible source models are available from large-magnitude past earthquakes. A stochastic source model generation algorithm thus becomes necessary for robust risk quantification using scenario earthquakes. We present an algorithm that combines the physics of fault ruptures as imaged in laboratory earthquakes with stress estimates on the fault constrained by field observations to generate stochastic source models for large-magnitude (Mw 6.0-8.0) strike-slip earthquakes. The algorithm is validated through a statistical comparison of synthetic ground motion histories from a stochastically generated source model for a magnitude 7.90 earthquake and a kinematic finite-source inversion of an equivalent magnitude past earthquake on a geometrically similar fault. The synthetic dataset comprises three-component ground motion waveforms, computed at 636 sites in southern California, for ten hypothetical rupture scenarios (five hypocenters, each with two rupture directions) on the southern San Andreas fault. A similar validation exercise is conducted for a magnitude 6.0 earthquake, the lower magnitude limit for the algorithm. Additionally, ground motions from the Mw 7.9 earthquake simulations are compared against predictions by the Campbell-Bozorgnia NGA relation as well as the ShakeOut scenario earthquake. The algorithm is then applied to generate fifty source models for a hypothetical magnitude 7.9 earthquake originating at Parkfield, with rupture propagating from north to south (towards Wrightwood), similar to the 1857 Fort Tejon earthquake. Using the spectral element method, three-component ground motion waveforms are computed in the Los Angeles basin for each scenario earthquake and the sensitivity of ground shaking intensity to seismic source parameters (such as the percentage of asperity area relative to the fault area, rupture speed, and risetime) is studied.
Under plausible San Andreas fault earthquakes in the next 30 years, modeled using the stochastic source algorithm, the performance of two 18-story steel moment frame buildings (UBC 1982 and 1997 designs) in southern California is quantified. The approach integrates rupture-to-rafters simulations into the PEER performance based earthquake engineering (PBEE) framework. Using stochastic sources and computational seismic wave propagation, three-component ground motion histories at 636 sites in southern California are generated for sixty scenario earthquakes on the San Andreas fault. The ruptures, with moment magnitudes in the range of 6.0-8.0, are assumed to occur at five locations on the southern section of the fault. Two unilateral rupture propagation directions are considered. The 30-year probabilities of all plausible ruptures in this magnitude range and in that section of the fault, as forecast by the United States Geological Survey, are distributed among these 60 earthquakes based on proximity and moment release. The response of the two 18-story buildings hypothetically located at each of the 636 sites under 3-component shaking from all 60 events is computed using 3-D nonlinear time-history analysis. Using these results, the probability of the structural response exceeding Immediate Occupancy (IO), Life-Safety (LS), and Collapse Prevention (CP) performance levels under San Andreas fault earthquakes over the next thirty years is evaluated.
Furthermore, the conditional and marginal probability distributions of peak ground velocity (PGV) and displacement (PGD) in Los Angeles and surrounding basins due to earthquakes occurring primarily on the mid-section of the southern San Andreas fault are determined using Bayesian model class identification. Simulated ground motions at sites within 55-75 km of the source, from a suite of 60 earthquakes (Mw 6.0-8.0) primarily rupturing the mid-section of the San Andreas fault, are used as the PGV and PGD data.
Abstract:
This work deals with two related areas: processing of visual information in the central nervous system, and the application of computer systems to research in neurophysiology.
Certain classes of interneurons in the brain and optic lobes of the blowfly Calliphora phaenicia were previously shown to be sensitive to the direction of motion of visual stimuli. These units were identified by visual field, preferred direction of motion, and the anatomical location from which they were recorded. The present work addresses two questions: (1) is there interaction between pairs of these units, and (2) if such relationships can be found, what is their nature? To answer these questions, it is essential to record from two or more units simultaneously, and to use more than a single recording electrode if recording points are to be chosen independently. Accordingly, such techniques were developed and are described.
One must also have practical, convenient means for analyzing the large volumes of data so obtained. It is shown that use of an appropriately designed computer system is a profitable approach to this problem. Both hardware and software requirements for a suitable system are discussed, and an approach to computer-aided data analysis is developed. A description is given of members of a collection of application programs developed for analysis of neurophysiological data and operated in the environment of and with support from an appropriate computer system. In particular, techniques developed for classification of multiple units recorded on the same electrode are illustrated, as are methods for convenient graphical manipulation of data via a computer-driven display.
By means of multiple electrode techniques and the computer-aided data acquisition and analysis system, the path followed by one of the motion detection units was traced from one optic lobe through the brain and into the opposite lobe. It is further shown that this unit and its mirror image in the opposite lobe have a mutually inhibitory relationship. This relationship is investigated. The existence of interaction between other pairs of units is also shown. For pairs of units responding to motion in the same direction, the relationship is of an excitatory nature; for those responding to motion in opposed directions, it is inhibitory.
Experience gained from use of the computer system is discussed and a critical review of the current system is given. The most useful features of the system were found to be the fast response, the ability to go from one analysis technique to another rapidly and conveniently, and the interactive nature of the display system. The shortcomings of the system were problems in real-time use and the programming barrier—the fact that building new analysis techniques requires a high degree of programming knowledge and skill. It is concluded that computer systems of the kind discussed will play an increasingly important role in studies of the central nervous system.
Abstract:
The cross sections for the two antiproton-proton annihilation-in-flight modes,
p̄ + p → π⁺ + π⁻
p̄ + p → K⁺ + K⁻
were measured for fifteen laboratory antiproton beam momenta ranging from 0.72 to 2.62 GeV/c. No magnets were used to determine the charges in the final state. As a result, the angular distributions were obtained in the form [dσ/dΩ(ΘC.M.) + dσ/dΩ(π − ΘC.M.)] for 45° ≲ ΘC.M. ≲ 135°.
A hodoscope-counter system was used to discriminate against events with final states having more than two particles and antiproton-proton elastic scattering events. One spark chamber was used to record the track of each of the two charged final particles. A total of about 40,000 pictures were taken. The events were analyzed by measuring the laboratory angle of the track in each chamber. The value of the square of the mass of the final particles was calculated for each event assuming the reaction
p̄ + p → a pair of particles with equal masses.
About 20,000 events were found to be either π±-pair or K±-pair annihilation events. The two different charged meson pair modes were also distinctly separated.
The average differential cross section of p̄ + p → π⁺ + π⁻ varied from ~ 25 µb/sr at antiproton beam momentum 0.72 GeV/c (total energy in center-of-mass system, √s = 2.0 GeV) to ~ 2 µb/sr at beam momentum 2.62 GeV/c (√s = 2.64 GeV). The most striking feature in the angular distribution was a peak at ΘC.M. = 90° (cos ΘC.M. = 0) which increased with √s and reached a maximum at √s ~ 2.1 GeV (beam momentum ~ 1.1 GeV/c). Then it diminished and seemed to disappear completely at √s ~ 2.5 GeV (beam momentum ~ 2.13 GeV/c). A valley in the angular distribution occurred at cos ΘC.M. ≈ 0.4. The differential cross section then increased as cos ΘC.M. approached 1.
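The quoted center-of-mass energies follow from standard fixed-target kinematics, √s = √(2m_p² + 2m_p E_lab) with E_lab = √(p² + m_p²) in natural units; a quick check against the values in the abstract:

```python
import math

M_P = 0.9383  # proton mass, GeV/c^2

def sqrt_s(beam_momentum_gev):
    """Center-of-mass energy (GeV) for an antiproton beam of the given
    laboratory momentum incident on a proton at rest."""
    e_lab = math.sqrt(beam_momentum_gev**2 + M_P**2)
    return math.sqrt(2.0 * M_P**2 + 2.0 * M_P * e_lab)

# Reproduces the abstract's values: ~2.0 GeV at 0.72 GeV/c
# and ~2.64 GeV at 2.62 GeV/c.
```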
The average differential cross section for p̄ + p → K⁺ + K⁻ was about one third of that of the π±-pair mode throughout the energy range of this experiment. At the lower energies, the angular distribution, unlike that of the π±-pair mode, was quite isotropic. However, a peak at ΘC.M. = 90° seemed to develop at √s ~ 2.37 GeV (antiproton beam momentum ~ 1.82 GeV/c). No observable change was seen at that energy in the π±-pair cross section.
The possible connection of these features with the observed meson resonances at 2.2 GeV and 2.38 GeV, and its implications, were discussed.
Abstract:
This thesis advances our physical understanding of the sensitivity of the hydrological cycle to global warming. Specifically, it focuses on changes in the longitudinal (zonal) variation of precipitation minus evaporation (P - E), which is predominantly controlled by planetary-scale stationary eddies. By studying idealized general circulation model (GCM) experiments with zonally varying boundary conditions, this thesis examines the mechanisms controlling the strength of stationary-eddy circulations and their role in the hydrological cycle. The overarching goal of this research is to understand the cause of changes in regional P - E with global warming. An understanding of such changes can be useful for impact studies focusing on water availability, ecosystem management, and flood risk.
Based on a moisture-budget analysis of ERA-Interim data, we establish an approximation for zonally anomalous P - E in terms of surface moisture content and stationary-eddy vertical motion in the lower troposphere. Part of the success of this approximation comes from our finding that transient-eddy moisture fluxes partially cancel the effect of stationary-eddy moisture advection, allowing divergent circulations to dominate the moisture budget. The lower-tropospheric vertical motion is related to horizontal motion in stationary eddies by Sverdrup and Ekman balance. These moisture- and vorticity-budget balances also hold in idealized and comprehensive GCM simulations across a range of climates.
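Schematically, in our notation (not necessarily the thesis's symbols), the vertically integrated moisture budget and the resulting approximation for the zonally anomalous (starred) component take the form

```latex
% Vertically integrated time-mean moisture budget:
P - E = -\nabla \cdot \frac{1}{g}\int_0^{p_s} q\,\mathbf{u}\,dp

% Schematic approximation for the zonally anomalous component:
% surface specific humidity q_s times the stationary-eddy pressure
% velocity \omega^* in the lower troposphere (ascent, \omega^* < 0,
% gives P - E > 0):
(P - E)^* \approx -\frac{1}{g}\, q_s\, \omega^*_{\mathrm{lower}}
```

The second relation is the sense in which zonally anomalous P - E is controlled by surface moisture content and lower-tropospheric stationary-eddy vertical motion.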
By examining climate changes in the idealized and comprehensive GCM simulations, we are able to show the utility of the vertical motion P - E approximation for splitting changes in zonally anomalous P - E into thermodynamic and dynamic components. Shifts in divergent stationary-eddy circulations dominate changes in zonally anomalous P - E. This limits the local utility of the "wet gets wetter, dry gets drier" idea, where existing P - E patterns are amplified with warming by the increase in atmospheric moisture content, with atmospheric circulations held fixed. The increase in atmospheric moisture content manifests instead in an increase in the amplitude of the zonally anomalous hydrological cycle as measured by the zonal variance of P - E. However, dynamic changes, particularly the slowdown of divergent stationary-eddy circulations, limit the strengthening of the zonally anomalous hydrological cycle. In certain idealized cases, dynamic changes are even strong enough to reverse the tendency towards "wet gets wetter, dry gets drier" with warming.
Motivated by the importance of stationary-eddy vertical velocities in the moisture budget analysis, we examine controls on the amplitude of stationary eddies across a wide range of climates in an idealized GCM with simple topographic and ocean-heating zonal asymmetries. An analysis of the thermodynamic equation in the vicinity of topographic forcing reveals the importance of on-slope surface winds, the midlatitude isentropic slope, and latent heating in setting the amplitude of stationary waves. The response of stationary eddies to climate change is determined primarily by the strength of zonal surface winds hitting the mountain. The sensitivity of stationary eddies to this surface forcing increases with climate change as the slope of midlatitude isentropes decreases. However, latent heating also plays an important role in damping the stationary-eddy response, and this damping becomes stronger with warming as the atmospheric moisture content increases. We find that the response of tropical overturning circulations forced by ocean heat-flux convergence is described by changes in the vertical structure of moist static energy and deep convection. This is used to derive simple scalings for the Walker circulation strength that capture the monotonic decrease with warming found in our idealized simulations.
The advances made in this thesis in understanding the amplitude of stationary waves in a changing climate can be directly applied to better understand and predict changes in the zonally anomalous hydrological cycle.