21 results for piracy propagation monitoring
in CaltechTHESIS
Abstract:
In response to infection or tissue dysfunction, immune cells develop into highly heterogeneous repertoires with diverse functions. Capturing the full spectrum of these functions requires analysis of large numbers of effector molecules from single cells. However, currently only 3-5 functional proteins can be measured from single cells. We developed a single cell functional proteomics approach that integrates a microchip platform with multiplex cell purification. This approach can quantitate 20 proteins from >5,000 phenotypically pure single cells simultaneously. With a 1-million-fold miniaturization, the system can detect down to ~100 molecules and requires only ~10⁴ cells. Single cell functional proteomic analysis finds broad applications in basic, translational and clinical studies. In the three studies conducted, it yielded critical insights for understanding clinical cancer immunotherapy, the mechanism of inflammatory bowel disease (IBD), and hematopoietic stem cell (HSC) biology.
To study phenotypically defined cell populations, single cell barcode microchips were coupled with upstream multiplex cell purification based on up to 11 parameters. Statistical algorithms were developed to process and model the high dimensional readouts. This analysis evaluates rare cells and is versatile for various cells and proteins. (1) We conducted an immune monitoring study of a phase 2 cancer cellular immunotherapy clinical trial that used T-cell receptor (TCR) transgenic T cells as the major therapeutic to treat metastatic melanoma. We evaluated the functional proteome of 4 antigen-specific, phenotypically defined T cell populations from the peripheral blood of 3 patients across 8 time points. (2) Natural killer (NK) cells can play a protective role in chronic inflammation, and their surface receptor, killer immunoglobulin-like receptor (KIR), has been identified as a risk factor for IBD. We compared the functional behavior of NK cells with differential KIR expression. These NK cells were retrieved from the blood of 12 patients with different genetic backgrounds. (3) HSCs are the progenitors of immune cells and are thought to have no immediate functional capacity against pathogens. However, recent studies identified expression of Toll-like receptors (TLRs) on HSCs. We studied the functional capacity of HSCs upon TLR activation. The comparison of HSCs from wild-type mice against those from genetic knock-out mouse models elucidated the signaling pathway involved.
In all three cases, we observed profound functional heterogeneity within phenotypically defined cells. Polyfunctional cells that conduct multiple functions also produce those proteins in large amounts, and they dominate the immune response. In the cancer immunotherapy, the strong cytotoxic and antitumor functions from transgenic TCR T cells contributed to a ~30% tumor reduction immediately after the therapy. However, this infused immune response disappeared within 2-3 weeks. Later, some patients gained a second antitumor response, consisting of the emergence of endogenous antitumor cytotoxic T cells and their production of multiple antitumor functions. These patients showed more effective long-term tumor control. In the IBD mechanism study, we noticed that, compared with others, NK cells expressing the KIR2DL3 receptor secreted a large array of effector proteins, such as TNF-α, CCLs and CXCLs. The functions of these cells regulated disease-contributing cells and protected host tissues, and their existence correlated with IBD disease susceptibility. In the HSC study, the HSCs exhibited functional capacity by producing TNF-α, IL-6 and GM-CSF. TLR stimulation activated NF-κB signaling in HSCs. The single cell functional proteome contains rich information that is independent of the genome and transcriptome. In all three cases, functional proteomic evaluation uncovered critical biological insights that could not be resolved otherwise. The integrated single cell functional proteomic analysis constructed a detailed kinetic picture of the immune response that took place during the clinical cancer immunotherapy. It revealed concrete functional evidence connecting genetics to IBD disease susceptibility. Further, it provided predictors that correlated with clinical responses and pathogenic outcomes.
Abstract:
This paper is in two parts. In the first part we give a qualitative study of wave propagation in an inhomogeneous medium principally by geometrical optics and ray theory. The inhomogeneity is represented by a sound-speed profile which is dependent upon one coordinate, namely the depth; and we discuss the general characteristics of wave propagation which result from a source placed on the sound channel axis. We show that our mathematical model of the sound-speed in the ocean actually predicts some of the behavior of the observed physical phenomena in the underwater sound channel. Using ray theoretic techniques we investigate the implications of our profile on the following characteristics of SOFAR propagation: (i) the sound energy traveling further away from the axis takes less time to travel from source to receiver than sound energy traveling closer to the axis, (ii) the focusing of sound energy in the sound channel at certain ranges, (iii) the overall ray picture in the sound channel.
In the second part a more penetrating quantitative study is done by means of analytical techniques on the governing equations. We study the transient problem for the Epstein profile by employing a double transform to formally derive an integral representation for the acoustic pressure amplitude, and from this representation we obtain several alternative representations. We study the case where both source and receiver are on the channel axis and greatly separated. In particular we verify some of the earlier results derived by ray theory and obtain asymptotic results for the acoustic pressure in the far-field.
Abstract:
Part I.
We have developed a technique for measuring the depth-time history of rigid body penetration into brittle materials (hard rocks and concretes) under a deceleration of ~10⁵ g. The technique includes bar-coded projectile, sabot-projectile separation, detection and recording systems. Because the technique can give very dense data on the penetration depth-time history, the penetration velocity can be deduced. Error analysis shows that the technique has a small intrinsic error of ~3-4% in time during penetration, and 0.3 to 0.7 mm in penetration depth. A series of 4140 steel projectile penetration experiments into G-mixture mortar targets has been conducted using the Caltech 40 mm gas/powder gun in the velocity range of 100 to 500 m/s.
We report, for the first time, the whole depth-time history of rigid body penetration into brittle materials (the G-mixture mortar) under 10⁵ g deceleration. Based on the experimental results, including the penetration depth-time history, damage to recovered target and projectile materials, and theoretical analysis, we find:
1. Target materials are damaged via compacting in the region in front of a projectile and via brittle radial and lateral crack propagation in the region surrounding the penetration path. The results suggest that expected cracks in front of penetrators may be stopped by a comminuted region that is induced by wave propagation. Aggregate erosion on the projectile lateral surface is < 20% of the final penetration depth. This result suggests that the effect of lateral friction on the penetration process can be ignored.
2. Final penetration depth, Pmax, is linearly scaled with initial projectile energy per unit cross-section area, es, when targets are intact after impact. Based on the experimental data on the mortar targets, the relation is Pmax(mm) = 1.15 es(J/mm²) + 16.39.
3. Estimation of the energy needed to create a unit penetration volume suggests that the average pressure acting on the target material during penetration is ~10 to 20 times higher than the unconfined strength of target materials under quasi-static loading, and 3 to 4 times higher than the highest possible pressure due to friction and material strength and its rate dependence. In addition, the experimental data show that the interaction between cracks and the target free surface significantly affects the penetration process.
4. Based on the fact that the penetration duration, tmax, increases slowly with es and is approximately independent of projectile radius, the dependence of tmax on projectile length is suggested to be described by tmax(μs) = 2.08 es(J/mm²) + 349.0 m/(πR²), in which m is the projectile mass in grams and R is the projectile radius in mm. The prediction from this relation is in reasonable agreement with the experimental data for different projectile lengths.
5. Deduced penetration velocity time histories suggest that whole penetration history is divided into three stages: (1) An initial stage in which the projectile velocity change is small due to very small contact area between the projectile and target materials; (2) A steady penetration stage in which projectile velocity continues to decrease smoothly; (3) A penetration stop stage in which projectile deceleration jumps up when velocities are close to a critical value of ~ 35 m/s.
6. The deduced average deceleration, a, in the steady penetration stage for projectiles with the same dimensions is found to be a(g) = 192.4v + 1.89 × 10⁴, where v is the initial projectile velocity in m/s. The average pressure acting on target materials during penetration is estimated to be comparable to the shock wave pressure.
7. A similarity of the penetration process is found, described by a relation between normalized penetration depth, P/Pmax, and normalized penetration time, t/tmax, as P/Pmax = f(t/tmax), where f is a function of t/tmax. After f(t/tmax) is determined using experimental data for projectiles with 150 mm length, the penetration depth-time history for projectiles with 100 mm length predicted by this relation is in good agreement with experimental data. This similarity also predicts that average deceleration increases with decreasing projectile length, which is verified by the experimental data.
8. Based on the penetration process analysis and the present data, a first-principles model for rigid body penetration is suggested. The model incorporates models for the contact area between projectile and target materials, the friction coefficient, the penetration stop criterion, and the normal stress on the projectile surface. The most important assumptions used in the model are: (1) The penetration process can be treated as a series of impact events; therefore, the pressure normal to the projectile surface is estimated using the Hugoniot relation of the target material; (2) The necessary condition for penetration is that the pressure acting on target materials is not lower than the Hugoniot elastic limit; (3) The friction force on the projectile lateral surface can be ignored due to cavitation during penetration. All the parameters involved in the model are determined from independent experimental data. The penetration depth-time histories predicted by the model are in good agreement with the experimental data.
9. Based on planar impact and previous quasi-static experimental data, the strain rate dependence of the mortar compressive strength is described by σf/σf0 = exp(0.0905(log(ε̇/ε̇₀))^1.14), in the strain rate range of 10⁻⁷/s to 10³/s (σf0 and ε̇₀ are the reference compressive strength and strain rate, respectively). The non-dispersive Hugoniot elastic wave in the G-mixture has an amplitude of ~0.14 GPa and a velocity of ~4.3 km/s.
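The empirical fits quoted in items 2 and 4 above can be evaluated directly. A minimal sketch, assuming hypothetical projectile parameters (the values of es, m and R below are illustrative, not taken from the experiments):

```python
# Illustrative sketch (parameters are hypothetical): evaluating the empirical
# fits quoted above for final penetration depth and penetration duration.
#   es - initial projectile energy per unit cross-section area (J/mm^2)
#   m  - projectile mass (g), R - projectile radius (mm)
import math

def p_max_mm(es):
    """Final penetration depth fit: Pmax(mm) = 1.15*es + 16.39."""
    return 1.15 * es + 16.39

def t_max_us(es, m, R):
    """Penetration duration fit: tmax(us) = 2.08*es + 349.0*m/(pi*R^2)."""
    return 2.08 * es + 349.0 * m / (math.pi * R**2)

# Hypothetical 4140 steel projectile:
es = 50.0            # J/mm^2
m, R = 500.0, 20.0   # g, mm
print(p_max_mm(es))        # depth in mm
print(t_max_us(es, m, R))  # duration in microseconds
```

Both fits are linear in es, so the sketch is only meaningful inside the 100-500 m/s impact velocity range the experiments covered.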
Part II.
Stress wave profiles in vitreous GeO2 were measured using piezoresistance gauges in the pressure range of 5 to 18 GPa under planar plate and spherical projectile impact. Experimental data show that the response of vitreous GeO2 to planar shock loading can be divided into three stages: (1) A ramp elastic precursor has a peak amplitude of 4 GPa and a peak particle velocity of 333 m/s. Wave velocity decreases from an initial longitudinal elastic wave velocity of 3.5 km/s to 2.9 km/s at 4 GPa; (2) A ramp wave with amplitude of 2.11 GPa follows the precursor when the peak loading pressure is 8.4 GPa. Wave velocity drops below the bulk wave velocity in this stage; (3) A shock wave achieving the final shock state forms when the peak pressure is > 6 GPa. The Hugoniot relation is D = 0.917 + 1.711u (km/s), using the present data and the data of Jackson and Ahrens [1979], when the shock wave pressure is between 6 and 40 GPa for ρ₀ = 3.655 g/cm³. Based on the present data, the phase change from 4-fold to 6-fold coordination of Ge⁴⁺ with O²⁻ in vitreous GeO2 occurs in the pressure range of 4 to 15 ± 1 GPa under planar shock loading. Comparison of the shock loading data for fused SiO2 with those for vitreous GeO2 demonstrates that the transformation to the rutile structure in both media is similar. The Hugoniots of vitreous GeO2 and fused SiO2 are found to coincide approximately if the pressure in fused SiO2 is scaled by the ratio of fused SiO2 to vitreous GeO2 density. This result, as well as the same structure, provides the basis for considering vitreous GeO2 as an analogous material to fused SiO2 under shock loading. Experimental results from the spherical projectile impact demonstrate: (1) The supported elastic shock in fused SiO2 decays less rapidly than a linear elastic wave when the elastic wave stress amplitude is higher than 4 GPa.
The supported elastic shock in vitreous GeO2 decays faster than a linear elastic wave; (2) In vitreous GeO2, unsupported shock waves with peak pressure in the phase transition range (4-15 GPa) decay with propagation distance, x, as ∝ 1/x^3.35, close to the prediction of Chen et al. [1998]. Based on a simple analysis of spherical wave propagation, we find that the different decay rates of a spherical elastic wave in fused SiO2 and vitreous GeO2 are predictable on the basis of the compressibility variation with stress under the one-dimensional strain condition in the two materials.
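The linear Hugoniot fit above can be converted into shock pressure through the standard Rankine-Hugoniot momentum jump condition P = ρ₀Du (a textbook relation, not stated explicitly in the abstract). A minimal sketch with a hypothetical particle velocity:

```python
# Illustrative sketch: the linear shock-velocity / particle-velocity Hugoniot
# quoted above, D = 0.917 + 1.711*u (km/s), and the corresponding shock
# pressure from the momentum jump condition P = rho0 * D * u, which comes out
# in GPa when rho0 is in g/cm^3 and D, u are in km/s.
RHO0 = 3.655  # g/cm^3, vitreous GeO2 (value from the text)

def shock_velocity(u):
    """Hugoniot fit for vitreous GeO2, quoted as valid for 6-40 GPa."""
    return 0.917 + 1.711 * u

def shock_pressure(u):
    """Rankine-Hugoniot momentum jump: P = rho0 * D * u (GPa)."""
    return RHO0 * shock_velocity(u) * u

# Hypothetical particle velocity of 1.5 km/s:
u = 1.5
print(shock_velocity(u))  # km/s
print(shock_pressure(u))  # GPa
```

The chosen u lands the pressure inside the 6-40 GPa window where the fit is stated to hold; outside that window the linear D-u relation should not be trusted.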
Abstract:
Granular crystals are compact periodic assemblies of elastic particles in Hertzian contact whose dynamic response can be tuned from strongly nonlinear to linear by the addition of a static precompression force. This unique feature allows for a wide range of studies that include the investigation of new fundamental nonlinear phenomena in discrete systems such as solitary waves, shock waves, discrete breathers and other defect modes. In the absence of precompression, a particularly interesting property of these systems is their ability to support the formation and propagation of spatially localized soliton-like waves with highly tunable properties. The wealth of parameters one can modify (particle size, geometry and material properties, periodicity of the crystal, presence of a static force, type of excitation, etc.) makes them ideal candidates for the design of new materials for practical applications. This thesis describes several ways to optimally control and tailor the propagation of stress waves in granular crystals through the use of heterogeneities (interstitial defect particles and material heterogeneities) in otherwise perfectly ordered systems. We focus on uncompressed two-dimensional granular crystals with interstitial spherical intruders and composite hexagonal packings and study their dynamic response using a combination of experimental, numerical and analytical techniques. We first investigate the interaction of defect particles with a solitary wave and utilize this fundamental knowledge in the optimal design of novel composite wave guides, shock or vibration absorbers obtained using gradient-based optimization methods.
Abstract:
We study the fundamental dynamic behavior of a special class of ordered granular systems in order to design new, structured materials with unique physical properties. The dynamic properties of granular systems are dictated by the nonlinear, Hertzian, potential in compression and zero tensile strength resulting from the discrete material structure. Engineering the underlying particle arrangement of granular systems allows for unique dynamic properties, not observed in natural, disordered granular media. While extensive studies on 1D granular crystals have suggested their usefulness for a variety of engineering applications, considerably less attention has been given to higher-dimensional systems. The extension of these studies in higher dimensions could enable the discovery of richer physical phenomena not possible in 1D, such as spatial redirection and anisotropic energy trapping. We present experiments, numerical simulation (based on a discrete particle model), and in some cases theoretical predictions for several engineered granular systems, studying the effects of particle arrangement on the highly nonlinear transient wave propagation to develop means for controlling the wave propagation pathways. The first component of this thesis studies the stress wave propagation resulting from a localized impulsive loading for three different 2D particle lattice structures: square, centered square, and hexagonal granular crystals. By varying the lattice structure, we observe a wide range of properties for the propagating stress waves: quasi-1D solitary wave propagation, fully 2D wave propagation with tunable wave front shapes, and 2D pulsed wave propagation. Additionally the effects of weak disorder, inevitably present in real granular systems, are investigated. 
The second half of this thesis studies the solitary wave propagation through 2D and 3D ordered networks of granular chains, reducing the effective density compared to granular crystals by selectively placing wave guiding chains to control the acoustic wave transmission. The rapid wave front amplitude decay exhibited by these granular networks makes them highly attractive for impact mitigation applications. The agreement between experiments, numerical simulations, and applicable theoretical predictions validates the wave guiding capabilities of these engineered granular crystals and networks and opens a wide range of possibilities for the realization of increasingly complex granular material design.
Abstract:
Seismic reflection methods have been extensively used to probe the Earth's crust and suggest the nature of its formative processes. The analysis of multi-offset seismic reflection data extends the technique from a reconnaissance method to a powerful scientific tool that can be applied to test specific hypotheses. The treatment of reflections at multiple offsets becomes tractable if the assumptions of high-frequency rays are valid for the problem being considered. Their validity can be tested by applying the methods of analysis to full wave synthetics.
Three studies illustrate the application of these principles to investigations of the nature of the crust in southern California. A survey shot by the COCORP consortium in 1977 across the San Andreas fault near Parkfield revealed events in the record sections whose arrival time decreased with offset. The reflectors generating these events are imaged using a multi-offset three-dimensional Kirchhoff migration. Migrations of full wave acoustic synthetics having the same limitations in geometric coverage as the field survey demonstrate the utility of this back projection process for imaging. The migrated depth sections show the locations of the major physical boundaries of the San Andreas fault zone. The zone is bounded on the southwest by a near-vertical fault juxtaposing a Tertiary sedimentary section against uplifted crystalline rocks of the fault zone block. On the northeast, the fault zone is bounded by a fault dipping into the San Andreas, which includes slices of serpentinized ultramafics, intersecting it at 3 km depth. These interpretations can be made despite complications introduced by lateral heterogeneities.
In 1985 the Calcrust consortium designed a survey in the eastern Mojave desert to image structures in both the shallow and the deep crust. Preliminary field experiments showed that the major geophysical acquisition problem to be solved was the poor penetration of seismic energy through a low-velocity surface layer. Its effects could be mitigated through special acquisition and processing techniques. Data obtained from industry showed that quality data could be obtained from areas having a deeper, older sedimentary cover, causing a re-definition of the geologic objectives. Long offset stationary arrays were designed to provide reversed, wider angle coverage of the deep crust over parts of the survey. The preliminary field tests and constant monitoring of data quality and parameter adjustment allowed 108 km of excellent crustal data to be obtained.
This dataset, along with two others from the central and western Mojave, was used to constrain rock properties and the physical condition of the crust. The multi-offset analysis proceeded in two steps. First, an increase in reflection peak frequency with offset is indicative of a thinly layered reflector. The thickness and velocity contrast of the layering can be calculated from the spectral dispersion, to discriminate between structures resulting from broad scale or local effects. Second, the amplitude effects at different offsets of P-P scattering from weak elastic heterogeneities indicate whether the signs of the changes in density, rigidity, and Lamé's parameter at the reflector agree or are opposed. The effects of reflection generation and propagation in a heterogeneous, anisotropic crust were contained by the design of the experiment and the simplicity of the observed amplitude and frequency trends. Multi-offset spectra and amplitude trend stacks of the three Mojave Desert datasets suggest that the most reflective structures in the middle crust are strong Poisson's ratio (σ) contrasts. Porous zones or the juxtaposition of units of mutually distant origin are indicated. Heterogeneities in σ increase towards the top of a basal crustal zone at ~22 km depth. The transitions to the basal zone and to the mantle include increases in σ. The Moho itself includes ~400 m layering having a velocity higher than that of the uppermost mantle. The Moho maintains the same configuration across the Mojave despite 5 km of crustal thinning near the Colorado River. This indicates that Miocene extension there either thinned just the basal zone, or that the basal zone developed regionally after the extensional event.
Abstract:
A Bayesian probabilistic methodology for on-line structural health monitoring which addresses the issue of parameter uncertainty inherent in the problem is presented. The method uses modal parameters for a limited number of modes, identified from measurements taken at a restricted number of degrees of freedom of a structure, as the measured structural data. The application presented uses a linear structural model whose stiffness matrix is parameterized to develop a class of possible models. Within the Bayesian framework, a joint probability density function (PDF) for the model stiffness parameters given the measured modal data is determined. Using this PDF, the marginal PDF of the stiffness parameter for each substructure given the data can be calculated.
Monitoring the health of a structure using these marginal PDFs involves two steps. First, the marginal PDF for each model parameter given modal data from the undamaged structure is found. The structure is then periodically monitored and updated marginal PDFs are determined. A measure of the difference between the calibrated and current marginal PDFs is used as a means to characterize the health of the structure. A procedure for interpreting the measure for use by an expert system in on-line monitoring is also introduced.
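As an illustration of the two-step comparison above: the abstract does not specify the difference measure, but if the marginal PDFs are approximated as Gaussian, the Kullback-Leibler divergence between the calibrated and current marginals is one plausible choice. A hedged sketch with hypothetical stiffness statistics:

```python
# Illustrative sketch (assumption: Gaussian marginals and hypothetical
# numbers; the thesis does not specify this particular measure). The KL
# divergence between the calibrated and current marginal PDF of a stiffness
# parameter grows as the two distributions separate, so it can serve as a
# scalar "difference measure" for monitoring.
import math

def kl_gauss(mu1, s1, mu2, s2):
    """KL( N(mu1, s1^2) || N(mu2, s2^2) ) for 1-D Gaussians."""
    return math.log(s2 / s1) + (s1**2 + (mu1 - mu2)**2) / (2.0 * s2**2) - 0.5

# Calibrated (undamaged) vs. current marginal for one substructure stiffness:
calibrated = (1.00, 0.05)  # mean, std of normalized stiffness
current    = (0.90, 0.06)  # apparent 10% stiffness reduction
print(kl_gauss(*current, *calibrated))  # larger => stronger evidence of change
```

In practice a threshold on such a measure (or its trend over repeated monitoring cycles) is what an expert system would act on, which matches the interpretation procedure the abstract mentions.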
The probabilistic framework is developed in order to address the model parameter uncertainty issue inherent in the health monitoring problem. To illustrate this issue, consider a very simplified deterministic structural health monitoring method. In such an approach, the model parameters which minimize an error measure between the measured and model modal values would be used as the "best" model of the structure. Changes between the model parameters identified using modal data from the undamaged structure and subsequent modal data would be used to find the existence, location and degree of damage. Due to measurement noise, limited modal information, and model error, the "best" model parameters might vary from one modal dataset to the next without any damage present in the structure. Thus, difficulties would arise in separating normal variations in the identified model parameters based on limitations of the identification method and variations due to true change in the structure. The Bayesian framework described in this work provides a means to handle this parametric uncertainty.
The probabilistic health monitoring method is applied to simulated data and laboratory data. The results of these tests are presented.
Abstract:
Arid and semiarid landscapes comprise nearly a third of the Earth's total land surface. These areas are coming under increasing land use pressures. Despite their low productivity these lands are not barren. Rather, they consist of fragile ecosystems vulnerable to anthropogenic disturbance.
The purpose of this thesis is threefold: (I) to develop and test a process model of wind-driven desertification, (II) to evaluate next-generation process-relevant remote monitoring strategies for use in arid and semiarid regions, and (III) to identify elements for effective management of the world's drylands.
In developing the process model of wind-driven desertification in arid and semiarid lands, field, remote sensing, and modeling observations from a degraded Mojave Desert shrubland are used. This model focuses on aeolian removal and transport of dust, sand, and litter as the primary mechanisms of degradation: killing plants by burial and abrasion, interrupting natural processes of nutrient accumulation, and allowing the loss of soil resources by abiotic transport. This model is tested in field sampling experiments at two sites and is extended by Fourier Transform and geostatistical analysis of high-resolution imagery from one site.
Next, the use of hyperspectral remote sensing data is evaluated as a substantive input to dryland remote monitoring strategies. In particular, the efficacy of spectral mixture analysis (SMA) in discriminating vegetation and soil types and determining vegetation cover is investigated. The results indicate that hyperspectral data may be less useful than often thought in determining vegetation parameters. Its usefulness in determining soil parameters, however, may be leveraged by developing simple multispectral classification tools that can be used to monitor desertification.
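For context, spectral mixture analysis models each pixel spectrum as a linear combination of endmember spectra. A minimal two-endmember sketch (the spectra below are hypothetical, and real SMA typically uses more endmembers with constrained least squares):

```python
# Illustrative sketch (hypothetical spectra, not from the thesis): linear
# spectral mixture analysis with two endmembers. Each pixel spectrum r is
# modeled as r ~= f*e_veg + (1 - f)*e_soil, and the fractional vegetation
# cover f has a closed-form least-squares solution.
def unmix_two_endmembers(r, e_veg, e_soil):
    """Least-squares f minimizing ||r - f*e_veg - (1-f)*e_soil||^2."""
    num = sum((ri - si) * (vi - si) for ri, vi, si in zip(r, e_veg, e_soil))
    den = sum((vi - si) ** 2 for vi, si in zip(e_veg, e_soil))
    return num / den

# Hypothetical 4-band reflectances:
veg  = [0.05, 0.08, 0.45, 0.50]  # green vegetation endmember
soil = [0.20, 0.25, 0.30, 0.35]  # bright soil endmember
pixel = [0.5 * v + 0.5 * s for v, s in zip(veg, soil)]  # a 50/50 mixture
print(unmix_two_endmembers(pixel, veg, soil))  # recovers ~0.5
```

The abstract's point is that the quality of such fraction estimates depends less on having many hyperspectral bands for vegetation than is often assumed, while soil discrimination may survive reduction to a few multispectral bands.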
Finally, the elements required for effective monitoring and management of arid and semiarid lands are discussed. Several large-scale multi-site field experiments are proposed to clarify the role of wind as a landscape and degradation process in dry lands. The role of remote sensing in monitoring the world's drylands is discussed in terms of optimal remote sensing platform characteristics and surface phenomena which may be monitored in order to identify areas at risk of desertification. A desertification indicator is proposed that unifies consideration of environmental and human variables.
Abstract:
This dissertation studies the long-term behavior of random Riccati recursions and a mathematical epidemic model. Riccati recursions arise in Kalman filtering: the error covariance matrix of the Kalman filter satisfies a Riccati recursion. Convergence conditions for time-invariant Riccati recursions are well studied. We focus on the time-varying case, and assume that the regressor matrix is random and independent and identically distributed according to a given distribution whose probability density function is continuous, supported on the whole space, and decays faster than any polynomial. We study the geometric convergence of the probability distribution. We also study the global dynamics of epidemic spread over complex networks for various models. For instance, in the discrete-time Markov chain model, each node is either healthy or infected at any given time. In this setting, the number of states increases exponentially with the size of the network. The Markov chain has a unique stationary distribution in which all the nodes are healthy with probability 1. Since the probability distribution of a Markov chain defined on a finite state space converges to the stationary distribution, this Markov chain model concludes that the epidemic dies out after a long enough time. To analyze the Markov chain model, we study a nonlinear epidemic model whose state at any given time is the vector of the marginal probabilities of infection of each node in the network at that time. Convergence to the origin in the epidemic map implies the extinction of the epidemic. The nonlinear model is upper-bounded by linearizing the model at the origin. As a result, the origin is the globally stable unique fixed point of the nonlinear model if the linear upper bound is stable. The nonlinear model has a second fixed point when the linear upper bound is unstable. We work on the stability analysis of the second fixed point for both discrete-time and continuous-time models.
Returning to the Markov chain model, we claim that the stability of the linear upper bound for the nonlinear model is strongly related to the extinction time of the Markov chain. We show that a stable linear upper bound is a sufficient condition for fast extinction, and that the probability of survival is bounded by the nonlinear epidemic map.
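The linearization argument above can be sketched for the standard discrete-time mean-field SIS model (the network and rates below are hypothetical): near the origin the nonlinear map is dominated by M = (1 − δ)I + βA, so when the spectral radius of M is below 1 the infection probabilities decay to zero.

```python
# Illustrative sketch (standard discrete-time mean-field SIS dynamics with
# hypothetical parameters). p[i] is the marginal infection probability of
# node i; beta is the per-contact infection rate, delta the recovery rate.
def sis_step(p, A, beta, delta):
    """One step of the nonlinear mean-field SIS map."""
    new = []
    for i in range(len(p)):
        no_infection = 1.0  # probability node i receives no infection
        for j in range(len(p)):
            no_infection *= 1.0 - beta * A[i][j] * p[j]
        new.append((1.0 - delta) * p[i] + (1.0 - p[i]) * (1.0 - no_infection))
    return new

A = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]  # complete graph on 3 nodes
beta, delta = 0.1, 0.5
# Linear upper bound M = (1-delta)I + beta*A is stable here:
# (1 - delta) + beta * lambda_max(A) = 0.5 + 0.1 * 2 = 0.7 < 1,
# so the infection probabilities contract to zero (extinction).
p = [0.9, 0.9, 0.9]
for _ in range(50):
    p = sis_step(p, A, beta, delta)
print(max(p))  # close to 0
```

With beta large enough that the bound exceeds 1, the same iteration instead settles at the nonzero second fixed point the abstract describes.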
Abstract:
The propagation of the fast magnetosonic wave in a tokamak plasma has been investigated at low power, between 10 and 300 watts, as a prelude to future heating experiments.
The attention of the experiments has been focused on the understanding of the coupling between a loop antenna and a plasma-filled cavity. Special emphasis has been given to the measurement of the complex loading impedance of the plasma. The importance of this measurement is that once the complex loading impedance of the plasma is known, a matching network can be designed so that the r.f. generator impedance can be matched to one of the cavity modes, thus delivering maximum power to the plasma. For future heating experiments it will be essential to be able to match the generator impedance to a cavity mode in order to couple the r.f. energy efficiently to the plasma.
As a consequence of the complex impedance measurements, it was discovered that the designs of the transmitting antenna and the impedance matching network are both crucial. The losses in the antenna and the matching network must be kept below the plasma loading in order to be able to detect the complex plasma loading impedance. This is even more important in future heating experiments, because the fundamental basis for efficient heating before any other consideration is to deliver more energy into the plasma than is dissipated in the antenna system.
The characteristics of the magnetosonic cavity modes are confirmed by three different methods. First, the cavity modes are observed as voltage maxima at the output of a six-turn receiving probe. Second, they also appear as maxima in the input resistance of the transmitting antenna. Finally, when the real and imaginary parts of the measured complex input impedance of the antenna are plotted in the complex impedance plane, the resulting curves are approximately circles, indicating a resonance phenomenon.
The observed plasma loading resistances at the various cavity modes are as high as 3 to 4 times the basic antenna resistance (~0.4 Ω). The estimated cavity Q's were between 400 and 700. This means that efficient energy coupling into the tokamak and low losses in the antenna system are possible.
Abstract:
Part I. Novel composite polyelectrolyte materials were developed that exhibit desirable charge propagation and ion-retention properties. The morphology of electrode coatings cast from these materials was shown to be more important for their electrochemical behavior than their chemical composition.
Part II. The Wilhelmy plate technique for measuring dynamic surface tension was extended to electrified liquid-liquid interphases. The dynamical response of the aqueous NaF-mercury electrified interphase was examined by concomitant measurement of surface tension, current, and applied electrostatic potential. Observations of the surface tension response to linear sweep voltammetry and to step function perturbations in the applied electrostatic potential (e.g., chronotensiometry) provided strong evidence that relaxation processes proceed for time periods at least an order of magnitude longer than those necessary to establish diffusion equilibrium. The dynamical response of the surface tension is analyzed within the context of non-equilibrium thermodynamics and a kinetic model that requires three simultaneous first order processes.
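A kinetic model of three simultaneous first-order processes implies that the surface tension relaxes as a sum of three exponentials. The sketch below illustrates that functional form; the amplitudes, time constants, and equilibrium value are hypothetical placeholders, since the abstract gives no numerical values:

```python
import math

# Relaxation form implied by "three simultaneous first-order processes":
# surface tension = equilibrium value + sum of three decaying exponentials.
# All numbers below are hypothetical placeholders, not values from the thesis.

A = (1.0, 0.5, 0.25)      # hypothetical amplitudes (arbitrary units)
TAU = (0.1, 1.0, 10.0)    # hypothetical time constants (seconds)
GAMMA_EQ = 400.0          # hypothetical equilibrium surface tension (mN/m)

def surface_tension(t):
    """Surface tension at time t after a step perturbation in potential."""
    return GAMMA_EQ + sum(a * math.exp(-t / tau) for a, tau in zip(A, TAU))

print(surface_tension(0.0))  # -> 401.75 (all three terms at full amplitude)
```

Fitting such a form to chronotensiometric data would separate fast (diffusion-scale) relaxation from the much slower processes the abstract reports.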
Abstract:
The 1-6 MeV electron flux at 1 AU has been measured for the time period October 1972 to December 1977 by the Caltech Electron/Isotope Spectrometers on the IMP-7 and IMP-8 satellites. The non-solar interplanetary electron flux reported here covered parts of five synodic periods. The 88 Jovian increases identified in these five synodic periods were classified by their time profiles. The fall time profiles were consistent with an exponential fall with τ ≈ 4-9 days. The rise time profiles displayed a systematic variation over the synodic period. Exponential rise time profiles with τ ≈ 1-3 days tended to occur in the time period before nominal connection, diffusive profiles predicted by the convection-diffusion model around nominal connection, and abrupt profiles after nominal connection.
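The quoted decay constants set the timescale of a Jovian increase directly: for an exponential fall J(t) = J0 exp(-t/τ), the flux drops to one-tenth of its peak after t = τ ln 10. A quick worked check with the abstract's τ range:

```python
import math

# Time for an exponentially decaying Jovian increase, J(t) = J0 * exp(-t/tau),
# to fall to one-tenth of its peak: t = tau * ln(10).
# The tau values are the fall-time range quoted in the abstract.

for tau in (4, 9):  # decay constants, days
    t_tenth = tau * math.log(10)
    print(f"tau = {tau} d: flux falls to 1/10 of peak in ~{t_tenth:.0f} days")
```

So an increase with τ ≈ 4-9 days fades by an order of magnitude over roughly 9-21 days, comparable to the spacing of the corotating structures discussed below.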
The times of enhancements in the magnetic field, |B|, at 1 AU showed a better correlation than corotating interaction regions (CIRs) with Jovian increases and other changes in the electron flux at 1 AU, suggesting that |B| enhancements indicate the times that barriers to electron propagation pass Earth. Time sequences of the increases and decreases in the electron flux at 1 AU were qualitatively modeled by using the times that CIRs passed Jupiter and the times that |B| enhancements passed Earth.
The electron data observed at 1 AU were modeled by using a convection-diffusion model of Jovian electron propagation. The synodic envelope formed by the maxima of the Jovian increases was modeled by the envelope formed by the predicted intensities at a time less than that needed to reach equilibrium. Even though the envelope shape calculated in this way was similar to the observed envelope, the required diffusion coefficients were not consistent with a diffusive process.
Three Jovian electron increases at 1 AU for the 1974 synodic period were fit with rise time profiles calculated from the convection-diffusion model. For the fits without an ambient electron background flux, the diffusion coefficients consistent with the data were k_x = 1.0-2.5 × 10^21 cm^2/sec and k_y = 1.6-2.0 × 10^22 cm^2/sec. For the fits that included the ambient electron background flux, the diffusion coefficients consistent with the data were k_x = 0.4-1.0 × 10^21 cm^2/sec and k_y = 0.8-1.3 × 10^22 cm^2/sec.
Abstract:
The high specific energy and power capacities of rechargeable lithium metal (Li^0) batteries are ideally suited to portable devices and are valuable as storage units for intermittent renewable energy sources. Lithium, the lightest and most electropositive metal, would be the optimal anode material for rechargeable batteries were it not for the fact that such devices fail unexpectedly by short-circuiting via the dendrites that grow across electrodes upon recharging. This phenomenon poses a major safety issue because it triggers a series of adverse events that begin with overheating, potentially followed by thermal decomposition and ultimately ignition of the organic solvents used in such devices.
In this thesis, we developed an experimental platform for monitoring and quantifying the dendrite populations grown in a Li battery prototype upon charging under various conditions. We explored the effects of pulse charging in the kHz range and of temperature on dendrite growth, as well as on the capacity lost to detached "dead" lithium particles.
Simultaneously, we developed a computational framework for understanding the dynamics of dendrite propagation. A coarse-grained Monte Carlo model assisted in the interpretation of the pulsing experiments, whereas molecular dynamics (MD) calculations provided insights into the mechanism of dendrite thermal relaxation. We also developed a computational framework for measuring the dead lithium crystals in the experimental images.
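The abstract does not describe the coarse-grained Monte Carlo model itself. The sketch below only illustrates the general flavor of such lattice models, in the spirit of diffusion-limited aggregation toward an electrode: ions random-walk down to a growing front and stick where they touch it. The grid size, walker count, and drift probability are arbitrary illustrative choices, not values from the thesis:

```python
import random

# Illustrative coarse-grained Monte Carlo sketch of dendritic deposition
# (diffusion-limited-aggregation style). NOT the thesis model; all parameters
# are arbitrary. Row 0 is the electrode; walkers (Li+ ions) enter from the top.

random.seed(0)
WIDTH, HEIGHT = 40, 40
# occupied[x] is the set of occupied heights in column x (periodic in x)
occupied = {x: {0} for x in range(WIDTH)}

def neighbors(x, y):
    yield (x - 1) % WIDTH, y
    yield (x + 1) % WIDTH, y
    yield x, y - 1
    yield x, y + 1

def touches_deposit(x, y):
    # An ion sticks when any lattice neighbor is already part of the deposit.
    return any(ny in occupied[nx] for nx, ny in neighbors(x, y) if ny >= 0)

for _ in range(300):  # release 300 walkers
    x, y = random.randrange(WIDTH), HEIGHT - 1
    while True:
        if touches_deposit(x, y):
            occupied[x].add(y)  # deposit and move on to the next ion
            break
        x = (x + random.choice((-1, 1))) % WIDTH  # lateral diffusion step
        if random.random() < 0.5:
            y -= 1  # drift toward the electrode under the applied field

max_height = max(max(col) for col in occupied.values())
print(f"tallest dendrite after 300 walkers: {max_height} lattice units")
```

Models of this type make it cheap to vary the effective deposition rate in time, which is what makes them useful for interpreting pulse-charging protocols.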
Abstract:
Complexity in the earthquake rupture process can result from many factors. This study investigates the origin of such complexity by examining several recent, large earthquakes in detail. In each case the local tectonic environment plays an important role in understanding the source of the complexity.
Several large shallow earthquakes (Ms > 7.0) along the Middle American Trench have similarities and differences between them that may lead to a better understanding of fracture and subduction processes. They are predominantly thrust events consistent with the known subduction of the Cocos plate beneath N. America. Two events occurring along this subduction zone close to triple junctions show considerable complexity. This may be attributable to a more heterogeneous stress environment in these regions and as such has implications for other subduction zone boundaries.
An event which looks complex but is actually rather simple is the 1978 Bermuda earthquake (Ms ~ 6). It is located predominantly in the mantle. Its mechanism is one of pure thrust faulting with a strike N 20°W and dip 42°NE. Its apparent complexity is caused by local crustal structure. This is an important event in terms of understanding and estimating seismic hazard on the eastern seaboard of N. America.
A study of several large strike-slip continental earthquakes identifies characteristics common to them that may be useful in determining what to expect from the next great earthquake on the San Andreas fault. The events are the 1976 Guatemala earthquake on the Motagua fault and two events on the Anatolian fault in Turkey (the 1967 Mudurnu Valley and 1976 E. Turkey events). An attempt to model the complex P-waveforms of these events yields good synthetic fits for the Guatemala and Mudurnu Valley events. The E. Turkey event, however, proves too complex, as it may have associated thrust or normal faulting. The Guatemala and Mudurnu Valley events are characterized by several individual sources occurring at intervals of between 5 and 20 seconds. The maximum size of an individual source appears to be bounded at about 5 × 10^26 dyne-cm. A detailed source study including directivity is performed on the Guatemala event. The source time history of the Mudurnu Valley event illustrates its significance in modeling strong ground motion in the near field: the complex source time series of the 1967 event produces amplitudes greater by a factor of 2.5 than a uniform model scaled to the same size, for a station 20 km from the fault.
Three large and important earthquakes demonstrate an important type of complexity: multiple-fault complexity. The first, the 1976 Philippine earthquake, an oblique thrust event, represents the first seismological evidence for a northeast-dipping subduction zone beneath the island of Mindanao. A large event, following the mainshock by 12 hours, occurred outside the aftershock area and apparently resulted from motion on a subsidiary fault, since it had a strike-slip mechanism.
An aftershock of the great 1960 Chilean earthquake on June 6, 1960, proved to be an interesting discovery. It appears to be a large strike-slip event at the southern boundary of the main rupture, most likely occurring on the landward extension of the Chile Rise transform fault, in the subducting plate. The results for this event suggest that a small event triggered a series of slow events, with the whole sequence lasting longer than 1 hour. This is indeed a "slow earthquake".
Perhaps one of the most complex of these events is the recent Tangshan, China earthquake. It began as a large strike-slip event. Within several seconds of the mainshock it may have triggered thrust faulting to the south of the epicenter. There is no doubt, however, that it triggered a large oblique normal event to the northeast, 15 hours after the mainshock. This event certainly contributed to the great loss of life sustained as a result of the Tangshan earthquake sequence.
What has been learned from these studies has been applied to predict what one might expect from the next great earthquake on the San Andreas. The expectation from this study is that such an event would be a large complex event, not unlike, but perhaps larger than, the Guatemala or Mudurnu Valley events. That is to say, it will most likely consist of a series of individual events in sequence. It is also quite possible that the event could trigger associated faulting on neighboring fault systems such as those occurring in the Transverse Ranges. This has important bearing on the earthquake hazard estimation for the region.
Abstract:
The propagation of cosmic rays through interstellar space has been investigated with a view to determining which particles can traverse astronomical distances without serious loss of energy. The principal energy-loss mechanism for high energy particles is interaction with radiation. It is found that high energy (10^13-10^18 eV) electrons drop to one-tenth their energy within 10^8 light years in the radiation density of the galaxy, and that protons are not significantly affected over this distance. Since the origin of the cosmic rays is not known, various hypotheses as to their origin are examined. If the source is near a star, it is found that the interaction of electrons and photons with the stellar radiation field and the interaction of electrons with the stellar magnetic field limit the amount of energy these particles can carry away from the star; however, the interaction is not strong enough to appreciably affect the energy of protons or light nuclei. The chief uncertainty in the results is the possible existence of a general galactic magnetic field. The main conclusion is twofold: if there is a general galactic magnetic field, then the primary spectrum has very few photons and only low energy (< 10^13 eV) electrons, and the higher energy particles are primarily protons regardless of the source mechanism; if there is no general galactic magnetic field, then the source of cosmic rays accelerates mainly protons and the present rate of production is much less than that in the past.
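The quoted one-tenth distance can be turned into an attenuation curve under one additional assumption: that the radiation-loss rate scales as the square of the electron energy, as in Thomson-limit inverse-Compton scattering (the abstract states only that radiation losses dominate, so this functional form is an assumption). Solving dE/dx = -kE^2 gives E(x) = E0 / (1 + 9 x / x_tenth), where x_tenth is the one-tenth distance:

```python
# Remaining energy fraction of a cosmic-ray electron vs. distance traveled,
# assuming a loss rate dE/dx = -k * E^2 (an assumed Thomson-limit
# inverse-Compton form; the abstract does not give the functional form).
# With this form, E(x) = E0 / (1 + 9 * x / X_TENTH), where X_TENTH is the
# distance at which the energy has dropped to one-tenth.

X_TENTH = 1e8  # light years, the one-tenth distance quoted in the abstract

def energy_fraction(x_ly):
    """Fraction of initial energy remaining after x_ly light years."""
    return 1.0 / (1.0 + 9.0 * x_ly / X_TENTH)

print(energy_fraction(1e8))  # -> 0.1 at the quoted one-tenth distance
print(energy_fraction(5e7))  # at half that distance, ~0.18 remains
```

Note the hallmark of a quadratic loss law: the distance to a given fractional loss shrinks in proportion to the initial energy, which is why only the electron, not the proton, flux is attenuated over galactic distances.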