20 results for Simulate

in CaltechTHESIS


Relevance: 10.00%

Publisher:

Abstract:

Transcranial magnetic stimulation (TMS) is a technique that stimulates the brain using a magnetic coil placed on the scalp. Since it is applicable to humans non-invasively, directly interfering with neural electrical activity, it is potentially a good tool to study the direct relationship between perceptual experience and neural activity. However, it has been difficult to produce a clear perceptible phenomenon with TMS of sensory areas, especially using a single magnetic pulse. Also, the biophysical mechanisms of magnetic stimulation of single neurons have been poorly understood.

In the psychophysical part of this thesis, perceptual phenomena induced by TMS of the human visual cortex are demonstrated as results of the interactions with visual inputs. We first introduce a method to create a hole, or a scotoma, in a flashed, large-field visual pattern using single-pulse TMS. Spatial aspects of the interactions are explored using the distortion effect of the scotoma depending on the visual pattern, which can be luminance-defined or illusory. Its similarity to the distortion of afterimages is also discussed. Temporal interactions are demonstrated in the filling-in of the scotoma with temporally adjacent visual features, as well as in the effective suppression of transient visual features. Also, paired-pulse TMS is shown to lead to different brightness modulations in transient and sustained visual stimuli.

In the biophysical part, we first develop a biophysical theory to simulate the effect of magnetic stimulation on arbitrary neuronal structure. Computer simulations are performed on cortical neuron models with realistic structure and channels, combined with the current injection that simulates magnetic stimulation. The simulation results account for general and basic characteristics of the macroscopic effects of TMS including our psychophysical findings, such as a long inhibitory effect, dependence on the background activity, and dependence on the direction of the induced electric field.
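For orientation, the coupling between an induced electric field and a neural cable that such simulations rest on is often summarized by the "activating function" of classical cable theory. The sketch below is a minimal, assumption-laden illustration of that idea only (the field profile, length scale, and units are made up); it is not the cortical neuron model developed in the thesis.

```python
import numpy as np

# Minimal sketch, not the thesis's model: in standard cable theory, the drive a
# magnetic pulse delivers to a straight neural process is often approximated by
# the "activating function", the negative spatial derivative of the induced
# electric field component along the fibre.
x = np.linspace(-0.05, 0.05, 1001)             # position along the fibre (m), assumed range
sigma = 0.01                                   # assumed spatial scale of the induced field (m)
E_parallel = np.exp(-x**2 / (2 * sigma**2))    # hypothetical induced field profile (arbitrary units)

activating = -np.gradient(E_parallel, x)       # f(x) = -dE_parallel/dx
# Regions with f(x) > 0 are candidate depolarization sites, i.e. where a single
# pulse is most likely to excite the membrane; f(x) < 0 marks hyperpolarization.
print("peak depolarizing drive at x =", x[np.argmax(activating)], "m")
```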

The perceptual effects and the cortical neuron model presented here provide foundations for the study of the relationship between perception and neural activity. Further insights would be obtained from extension of our model to neuronal networks and psychophysical studies based on predictions of the biophysical model.

Relevance: 10.00%

Publisher:

Abstract:

In the first part of this thesis a study of the effect of the longitudinal distribution of optical intensity and electron density on the static and dynamic behavior of semiconductor lasers is performed. A static model for above threshold operation of a single mode laser, consisting of multiple active and passive sections, is developed by calculating the longitudinal optical intensity distribution and electron density distribution in a self-consistent manner. Feedback from an index and gain Bragg grating is included, as well as feedback from discrete reflections at interfaces and facets. Longitudinal spatial holeburning is analyzed by including the dependence of the gain and the refractive index on the electron density. The mechanisms of spatial holeburning in quarter wave shifted DFB lasers are analyzed. A new laser structure with a uniform optical intensity distribution is introduced and an implementation is simulated, resulting in a large reduction of the longitudinal spatial holeburning effect.
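For orientation, the self-consistent calculation sketched above is conventionally built on the coupled-mode equations for the forward and backward field amplitudes R(z) and S(z) in a grating section; the form below is the generic textbook version, not necessarily the thesis's exact notation, and omits the discrete-reflection terms:

```latex
\frac{dR}{dz} = \bigl(g(N) - i\,\delta(N)\bigr)\,R + i\,\kappa(N)\,S,
\qquad
-\frac{dS}{dz} = \bigl(g(N) - i\,\delta(N)\bigr)\,S + i\,\kappa(N)\,R
```

Here g is the net field gain, δ the detuning from the Bragg condition, and κ the grating coupling coefficient. Making g, δ, and κ functions of the local electron density N(z) and iterating against the carrier rate equation is what closes the self-consistent loop and exposes longitudinal spatial holeburning.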

A dynamic small-signal model is then developed by including the optical intensity and electron density distribution, as well as the dependence of the grating coupling coefficients on the electron density. Expressions are derived for the intensity and frequency noise spectrum, the spontaneous emission rate into the lasing mode, the linewidth enhancement factor, and the AM and FM modulation response. Different chirp components are identified in the FM response, and a new adiabatic chirp component is discovered. This new adiabatic chirp component is caused by the nonuniform longitudinal distributions, and is found to dominate at low frequencies. Distributed feedback lasers with partial gain coupling are analyzed, and it is shown how the dependence of the grating coupling coefficients on the electron density can result in an enhancement of the differential gain with an associated enhancement in modulation bandwidth and a reduction in chirp.

In the second part, the spectral characteristics of a passively mode-locked two-section multiple quantum well laser coupled to an external cavity are studied. Broad-band wavelength tuning using an external grating is demonstrated for the first time in passively mode-locked semiconductor lasers. A record tuning range of 26 nm is measured, with pulse widths of typically a few picoseconds and time-bandwidth products of more than 10 times the transform limit. It is then demonstrated that these large time-bandwidth products are due to a strong linear upchirp, by performing pulse compression by a factor of 15 to record pulse widths as low as 320 fs.

A model for pulse propagation through a saturable medium with self-phase modulation, due to the α-parameter, is developed for quantum well material, including the frequency dependence of the gain medium. This model is used to simulate two-section devices coupled to an external cavity. When no self-phase modulation is present, it is found that the pulses are asymmetric with a sharper rising edge, that the pulse tails have an exponential behavior, and that the time-bandwidth product is at the transform limit of 0.3. Inclusion of self-phase modulation results in a linear upchirp imprinted on the pulse after each round-trip. This linear upchirp is due to a combination of self-phase modulation in a gain section and absorption of the leading edge of the pulse in the saturable absorber.
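The link between gain saturation and chirp invoked here is usually expressed through the α-parameter in the standard saturable-amplifier form below (a generic sketch of the relation, not the thesis's full frequency-dependent model):

```latex
\phi_{\mathrm{out}}(\tau) = \phi_{\mathrm{in}}(\tau) - \frac{\alpha}{2}\,h(\tau),
\qquad
h(\tau) = \int_{0}^{L} g(z,\tau)\,dz,
\qquad
\delta\nu(\tau) = -\frac{1}{2\pi}\,\frac{\partial \phi_{\mathrm{out}}}{\partial \tau}
               = \frac{\alpha}{4\pi}\,\frac{\partial h}{\partial \tau}
```

Because h(τ) evolves differently in a saturating gain section than in a saturable absorber that clips the leading edge of the pulse, the competition between the two sections sets the sign and magnitude of the chirp imprinted on the pulse each round-trip.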

Relevance: 10.00%

Publisher:

Abstract:

This document contains three papers examining the microstructure of financial interaction in development and market settings. I first examine the industrial organization of financial exchanges, specifically limit order markets. In this section, I perform a case study of Google stock surrounding a surprising earnings announcement in the 3rd quarter of 2009, uncovering parameters that describe information flows and liquidity provision. I then explore the disbursement process for community-driven development projects. This section is game theoretic in nature, using a novel three-player ultimatum structure. I finally develop econometric tools to simulate equilibrium and identify equilibrium models in limit order markets.

In chapter two, I estimate an equilibrium model using limit order data, finding parameters that describe information and liquidity preferences for trading. As a case study, I estimate the model for Google stock surrounding an unexpected good-news earnings announcement in the 3rd quarter of 2009. I find a substantial decrease in asymmetric information prior to the earnings announcement. I also simulate counterfactual dealer markets and find empirical evidence that limit order markets perform more efficiently than do their dealer market counterparts.

In chapter three, I examine Community-Driven Development. Community-Driven Development is considered a tool for empowering communities to develop their own aid projects. While evidence has been mixed as to the effectiveness of CDD in achieving disbursement to intended beneficiaries, the literature maintains that local elites generally take control of most programs. I present a three-player ultimatum game which describes a potential decentralized aid procurement process. Players successively split a dollar in aid money, and the final player, the targeted community member, decides whether or not to blow the whistle. Despite the elite capture present in my model, I find conditions under which money reaches targeted recipients. My results describe a perverse possibility in the decentralized aid process which could make detection of elite capture more difficult than previously considered. These processes may reconcile recent empirical work claiming effectiveness of the decentralized aid process with case studies which claim otherwise.

In chapter four, I develop in more depth the empirical and computational means to estimate model parameters in the case study in chapter two. I describe the liquidity supplier problem and equilibrium among those suppliers. I then outline the analytical forms for computing certainty-equivalent utilities for the informed trader. Following this, I describe a recursive algorithm which facilitates computing equilibrium in supply curves. Finally, I outline implementation of the Method of Simulated Moments in this context, focusing on Indirect Inference and formulating the pseudo model.
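As a generic illustration of the estimation machinery named above (Method of Simulated Moments with an identity weighting matrix and common random numbers), the toy sketch below matches simulated to observed moments for a stand-in model; the data-generating process, moment choices, and optimizer are assumptions, not the limit-order model of chapter two.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
observed = rng.normal(1.0, 2.0, size=5_000)       # stand-in for observed limit-order data
m_obs = np.array([observed.mean(), observed.var()])

base_draws = rng.standard_normal(size=20_000)     # common random numbers, fixed across evaluations

def simulate(theta):
    mu, sigma = theta
    return mu + abs(sigma) * base_draws           # stand-in structural model

def objective(theta):
    sim = simulate(theta)
    diff = np.array([sim.mean(), sim.var()]) - m_obs
    return float(diff @ diff)                     # identity weighting matrix for simplicity

theta_hat = minimize(objective, x0=np.array([0.0, 1.0]), method="Nelder-Mead").x
print("estimated (mu, sigma):", theta_hat)
```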

Relevance: 10.00%

Publisher:

Abstract:

In the preparation of small organic paramagnets, these structures may conceptually be divided into spin-containing units (SCs) and ferromagnetic coupling units (FCs). The synthesis and direct observation of a series of hydrocarbon tetraradicals designed to test the ferromagnetic coupling ability of m-phenylene, 1,3-cyclobutane, 1,3-cyclopentane, and 2,4-adamantane (a chair 1,3-cyclohexane) using Berson TMMs and cyclobutanediyls as SCs are described. While 1,3-cyclobutane and m-phenylene are good ferromagnetic coupling units under these conditions, the ferromagnetic coupling ability of 1,3-cyclopentane is poor, and 1,3-cyclohexane is apparently an antiferromagnetic coupling unit. In addition, this is the first report of ferromagnetic coupling between the spins of localized biradical SCs.

The poor coupling of 1,3-cyclopentane has enabled a study of the variable temperature behavior of a 1,3-cyclopentane FC-based tetraradical in its triplet state. Through fitting the observed data to the usual Boltzmann statistics, we have been able to determine the separation of the ground quintet and excited triplet states. From this data, we have inferred the singlet-triplet gap in 1,3-cyclopentanediyl to be 900 cal/mol, in remarkable agreement with theoretical predictions of this number.

The ability to simulate EPR spectra has been crucial to the assignments made here. A powder EPR simulation package is described that uses the Zeeman and dipolar terms to calculate powder EPR spectra for triplet and quintet states.
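For reference, the Zeeman-plus-dipolar (zero-field splitting) spin Hamiltonian that such powder simulations are ordinarily built on has the standard form below; the thesis's implementation details are not reproduced here:

```latex
\hat{H} \;=\; g\,\mu_B\,\mathbf{B}\cdot\hat{\mathbf{S}}
        \;+\; D\!\left[\hat{S}_z^{2} - \tfrac{1}{3}S(S+1)\right]
        \;+\; E\!\left(\hat{S}_x^{2} - \hat{S}_y^{2}\right)
```

with S = 1 for triplet and S = 2 for quintet states; a powder spectrum then follows from averaging the resonance fields over molecular orientations.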

Methods for characterizing paramagnetic samples by SQUID magnetometry have been developed, including robust routines for data fitting and analysis. A precursor to a potentially magnetic polymer was prepared by ring-opening metathesis polymerization (ROMP), and doped samples of this polymer were studied by magnetometry. While the present results are not positive, calculations have suggested modifications in this structure which should lead to the desired behavior.

Source listings for all computer programs are given in the appendix.

Relevance: 10.00%

Publisher:

Abstract:

Proton transfer reactions at the interface of water with hydrophobic media, such as air or lipids, are ubiquitous on our planet. These reactions orchestrate a host of vital phenomena in the environment including, for example, acidification of clouds, enzymatic catalysis, chemistries of aerosol and atmospheric gases, and bioenergetic transduction. Despite their importance, however, quantitative details underlying these interactions have remained unclear. Deeper insight into these interfacial reactions is also required to address challenges in green chemistry, improved water quality, self-assembly of materials, the next generation of micro- and nanofluidics, adhesives, coatings, catalysts, and electrodes. This thesis describes an experimental and theoretical investigation of proton transfer reactions at the air-water interface as a function of hydration gradients, electrochemical potential, and electrostatics. Since the emerging insights hold at the lipid-water interface as well, this work is also expected to aid understanding of complex biological phenomena associated with proton migration across membranes.

Based on our current understanding, the physicochemical properties of gas-phase water are drastically different from those of bulk water. For example, the gas-phase hydronium ion, H3O+(g), can protonate most (non-alkane) organic species, whereas H3O+(aq) can neutralize only relatively strong bases. Thus, to be able to understand and engineer water-hydrophobe interfaces, it is imperative to investigate this fluctuating region of molecular thickness wherein the ‘function’ of chemical species transitions from one phase to another via steep gradients in hydration, dielectric constant, and density. Aqueous interfaces are difficult to approach by current experimental techniques because designing experiments to specifically sample interfacial layers (< 1 nm thick) is an arduous task. While recent advances in surface-specific spectroscopies have provided valuable information regarding the structure of aqueous interfaces, structure alone is inadequate to decipher the function. Similarly, theoretical predictions based on classical molecular dynamics have remained limited in their scope.

Recently, we have adapted an analytical electrospray ionization mass spectrometer (ESIMS) for probing reactions at the gas-liquid interface in real time. This technique is direct and surface-specific, and it provides unambiguous mass-to-charge ratios of interfacial species. With this innovation, we have been able to investigate the following:

1. How do anions mediate proton transfers at the air-water interface?

2. What is the basis for the negative surface potential at the air-water interface?

3. What is the mechanism for catalysis ‘on-water’?

In addition to our experiments with the ESIMS, we applied quantum mechanics and molecular dynamics to simulate our experiments toward gaining insight at the molecular scale. Our results unambiguously demonstrated the role of electrostatic reorganization of interfacial water during proton transfer events. With our experimental and theoretical results on the ‘superacidity’ of the surface of mildly acidic water, we also explored implications for atmospheric chemistry and green chemistry. Our most recent results explained the basis for the negative charge of the air-water interface and showed that the water-hydrophobe interface could serve as a site for enhanced autodissociation of water compared to the condensed phase.

Relevance: 10.00%

Publisher:

Abstract:

Part I

Particles are a key feature of planetary atmospheres. On Earth they represent the greatest source of uncertainty in the global energy budget. This uncertainty can be addressed by making more measurements, by improving the theoretical analysis of measurements, and by better modeling basic particle nucleation and initial particle growth within an atmosphere. This work will focus on the latter two methods of improvement.

Uncertainty in measurements is largely due to particle charging. Accurate descriptions of particle charging are challenging because one deals with particles in a gas as opposed to a vacuum, so different length scales come into play. Previous studies have considered the effects of the transition between the continuum and kinetic regimes and the effects of two- and three-body interactions within the kinetic regime. These studies, however, use questionable assumptions about the charging process, which result in skewed observations and bias in the proposed dynamics of aerosol particles. These assumptions affect both the ions and particles in the system. Ions are assumed to be point monopoles with a single characteristic speed rather than a distribution of speeds. Particles are assumed to be perfect conductors that have up to five elementary charges on them. The effects of three-body (ion-molecule-particle) interactions are also overestimated. By revising this theory so that the basic physical attributes of both ions and particles and their interactions are better represented, we are able to make more accurate predictions of particle charging in both the kinetic and continuum regimes.

The same revised theory that was used above to model ion charging can also be applied to the flux of neutral vapor phase molecules to a particle or initial cluster. Using these results we can model the vapor flux to a neutral or charged particle due to diffusion and electromagnetic interactions. In many classical theories currently applied to these models, the finite size of the molecule and the electromagnetic interaction between the molecule and particle, especially for the neutral particle case, are completely ignored, or, as is often the case for a permanent dipole vapor species, strongly underestimated. Comparing our model to these classical models we determine an “enhancement factor” to characterize how important the addition of these physical parameters and processes is to the understanding of particle nucleation and growth.
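One plausible way to write such an enhancement factor (the thesis's precise definition may differ) is as the ratio of the computed flux to the classical free-molecular flux to a bare sphere, which ignores the molecule's finite size and any molecule-particle interaction:

```latex
\eta \;=\; \frac{\Phi}{\Phi_{0}},
\qquad
\Phi_{0} \;=\; \pi a_p^{2}\, n_v\, \bar{c},
\qquad
\bar{c} \;=\; \sqrt{\frac{8 k_B T}{\pi m}}
```

where a_p is the particle radius, n_v the vapor number density, and c̄ the mean thermal speed of the vapor molecules; η > 1 then quantifies how much finite molecular size and electromagnetic attraction (for example, from a permanent-dipole vapor species) raise the condensation flux.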

Part II

Whispering gallery mode (WGM) optical biosensors are capable of extraordinarily sensitive specific and non-specific detection of species suspended in a gas or fluid. Recent experimental results suggest that these devices may attain single-molecule sensitivity to protein solutions in the form of stepwise shifts in their resonance wavelength, λ_R, but present sensor models predict much smaller steps than were reported. This study examines the physical interaction between a WGM sensor and a molecule adsorbed to its surface, exploring assumptions made in previous efforts to model WGM sensor behavior, and describing computational schemes that model the experiments for which single-protein sensitivity was reported. The resulting model is used to simulate sensor performance, within constraints imposed by the limited material property data. On this basis, we conclude that nonlinear optical effects would be needed to attain the reported sensitivity, and that, in the experiments for which extreme sensitivity was reported, a bound protein experiences optical energy fluxes too high for such effects to be ignored.
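For scale, the baseline against which such single-molecule steps are usually judged is the first-order (reactive) perturbation estimate below; this is the standard textbook form, not the nonlinear model the study ultimately argues for:

```latex
\frac{\Delta\lambda_R}{\lambda_R} \;\approx\;
\frac{\alpha_{\mathrm{ex}}\,\bigl|\mathbf{E}_0(\mathbf{r}_m)\bigr|^{2}}
     {2\displaystyle\int \varepsilon(\mathbf{r})\,\bigl|\mathbf{E}_0(\mathbf{r})\bigr|^{2}\,dV}
```

where α_ex is the excess polarizability of the adsorbed molecule, r_m its binding site on the resonator surface, and E_0 the unperturbed mode field.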

Relevance: 10.00%

Publisher:

Abstract:

Computer science and electrical engineering have been the great success story of the twentieth century. The neat modularity and mapping of a language onto circuits has led to robots on Mars, desktop computers and smartphones. But these devices are not yet able to do some of the things that life takes for granted: repair a scratch, reproduce, regenerate, or grow exponentially fast–all while remaining functional.

This thesis explores and develops algorithms, molecular implementations, and theoretical proofs in the context of “active self-assembly” of molecular systems. The long-term vision of active self-assembly is the theoretical and physical implementation of materials that are composed of reconfigurable units with the programmability and adaptability of biology’s numerous molecular machines. En route to this goal, we must first find a way to overcome the memory limitations of molecular systems, and to discover the limits of complexity that can be achieved with individual molecules.

One of the main thrusts in molecular programming is to use computer science as a tool for figuring out what can be achieved. While molecular systems that are Turing-complete have been demonstrated [Winfree, 1996], these systems still cannot achieve some of the feats biology has achieved.

One might think that because a system is Turing-complete, capable of computing “anything,” that it can do any arbitrary task. But while it can simulate any digital computational problem, there are many behaviors that are not “computations” in a classical sense, and cannot be directly implemented. Examples include exponential growth and molecular motion relative to a surface.

Passive self-assembly systems cannot implement these behaviors because (a) molecular motion relative to a surface requires a source of fuel that is external to the system, and (b) passive systems are too slow to assemble exponentially-fast-growing structures. We call these behaviors “energetically incomplete” programmable behaviors. This class of behaviors includes any behavior where a passive physical system simply does not have enough physical energy to perform the specified tasks in the requisite amount of time.

As we will demonstrate and prove, a sufficiently expressive implementation of an “active” molecular self-assembly approach can achieve these behaviors. Using an external source of fuel solves part of the problem, so the system is not “energetically incomplete.” But the programmable system also needs to have sufficient expressive power to achieve the specified behaviors. Perhaps surprisingly, some of these systems do not even require Turing completeness to be sufficiently expressive.

Building on a large variety of work by other scientists in the fields of DNA nanotechnology, chemistry and reconfigurable robotics, this thesis introduces several research contributions in the context of active self-assembly.

We show that simple primitives such as insertion and deletion are able to generate complex and interesting results such as the growth of a linear polymer in logarithmic time and the ability of a linear polymer to treadmill. To this end we developed a formal model for active self-assembly that is directly implementable with DNA molecules. We show that this model is computationally equivalent to a machine capable of producing languages that are stronger than the regular languages and, at most, as strong as context-free grammars. This is a great advance in the theory of active self-assembly, as prior models were either entirely theoretical or only implementable in the context of macro-scale robotics.

We developed a chain reaction method for the autonomous exponential growth of a linear DNA polymer. Our method is based on the insertion of molecules into the assembly, which generates two new insertion sites for every initial one employed. The building of a line in logarithmic time is a first step toward building a shape in logarithmic time. We demonstrate the first construction of a synthetic linear polymer that grows exponentially fast via insertion. We show that monomer molecules are converted into the polymer in logarithmic time via spectrofluorimetry and gel electrophoresis experiments. We also demonstrate the division of these polymers via the addition of a single DNA complex that competes with the insertion mechanism. This shows the growth of a population of polymers in logarithmic time. We characterize the DNA insertion mechanism that we utilize in Chapter 4. We experimentally demonstrate that we can control the kinetics of this reaction over at least seven orders of magnitude by programming the sequences of DNA that initiate the reaction.
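The arithmetic behind the logarithmic-time claim, under the stated assumption that every insertion event exposes two new insertion sites, is a simple doubling argument (kinetics ignored):

```latex
s_{k+1} = 2\,s_k,\qquad n_{k+1} = n_k + s_k
\;\;\Longrightarrow\;\;
s_k = 2^{k} s_0,\qquad n_k = n_0 + \bigl(2^{k}-1\bigr)\,s_0,
\qquad k \;\approx\; \log_2\!\bigl(n_k / s_0\bigr)
```

where s_k counts available insertion sites and n_k the polymer length after k rounds of insertion, so the number of rounds needed to reach length n grows only logarithmically in n.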

In addition, we review co-authored work on programming molecular robots using prescriptive landscapes of DNA origami; this was the first microscopic demonstration of programming a molecular robot to walk on a 2-dimensional surface. We developed a snapshot method for imaging these random walking molecular robots and a CAPTCHA-like analysis method for difficult-to-interpret imaging data.

Relevance: 10.00%

Publisher:

Abstract:

Computational general relativity is a field of study which has reached maturity only within the last decade. This thesis details several studies that elucidate phenomena related to the coalescence of compact object binaries. Chapters 2 and 3 recount work towards developing new analytical tools for visualizing and reasoning about dynamics in strongly curved spacetimes. In both studies, the results employ analogies with the classical theory of electricity and magnetism, first (Ch. 2) in the post-Newtonian approximation to general relativity and then (Ch. 3) in full general relativity, though in the absence of matter sources. In Chapter 4, we examine the topological structure of absolute event horizons during binary black hole merger simulations conducted with the SpEC code. Chapter 6 reports on the progress of the SpEC code in simulating the coalescence of neutron star-neutron star binaries, while Chapter 7 tests the effects of various numerical gauge conditions on the robustness of black hole formation from stellar collapse in SpEC. In Chapter 5, we examine the nature of pseudospectral expansions of non-smooth functions, motivated by the need to simulate the stellar surface in Chapters 6 and 7. In Chapter 8, we study how thermal effects in the nuclear equation of state affect the equilibria and stability of hypermassive neutron stars. Chapter 9 presents supplements to the work in Chapter 8, including an examination of the stability question raised in Chapter 8 in greater mathematical detail.

Relevance: 10.00%

Publisher:

Abstract:

This thesis presents an experimental investigation of the axisymmetric heat transfer from a small scale fire and resulting buoyant plume to a horizontal, unobstructed ceiling during the initial stages of development. A propane-air burner yielding a heat source strength between 1.0 kW and 1.6 kW was used to simulate the fire, and measurements proved that this heat source did satisfactorily represent a source of buoyancy only. The ceiling consisted of a 1/16" steel plate of 0.91 m diameter, insulated on the upper side. The ceiling height was adjustable between 0.5 m and 0.91 m. Temperature measurements were carried out in the plume, ceiling jet, and on the ceiling.

Heat transfer data were obtained by using the transient method and applying corrections for the radial conduction along the ceiling and losses through the insulation material. The ceiling heat transfer coefficient was based on the adiabatic ceiling jet temperature (recovery temperature) reached after a long time. A parameter involving the source strength Q and ceiling height H was found to correlate measurements of this temperature and its radial variation. A similar parameter for estimating the ceiling heat transfer coefficient was confirmed by the experimental results.
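For context, the form such a correlating parameter typically takes follows from point-source plume scaling; the relations below are the generic scaling and the definition used for the coefficient, not the thesis's fitted correlation:

```latex
\Delta T_{\mathrm{ad}}(r) \;\propto\; \frac{Q^{2/3}}{H^{5/3}}\, f\!\left(\frac{r}{H}\right),
\qquad
h \;=\; \frac{\dot q''}{T_{\mathrm{ad}} - T_{\mathrm{ceiling}}}
```

where ΔT_ad is the adiabatic (recovery) ceiling jet temperature rise, f(r/H) describes its radial decay, and h is the ceiling heat transfer coefficient referenced to that recovery temperature.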

This investigation therefore provides reasonable estimates for the heat transfer from a buoyant gas plume to a ceiling in the axisymmetric case, for the stagnation region where such heat transfer is a maximum and for the ceiling jet region (r/H ≤ 0.7). A comparison with data from experiments which involved larger heat sources indicates that the predicted scaling of temperatures and heat transfer rates for larger scale fires is adequate.

Relevance: 10.00%

Publisher:

Abstract:

We simulate incompressible, MHD turbulence using a pseudo-spectral code. Our major conclusions are as follows.

1) MHD turbulence is most conveniently described in terms of counter propagating shear Alfvén and slow waves. Shear Alfvén waves control the cascade dynamics. Slow waves play a passive role and adopt the spectrum set by the shear Alfvén waves. Cascades composed entirely of shear Alfvén waves do not generate a significant measure of slow waves.

2) MHD turbulence is anisotropic, with energy cascading more rapidly along k⊥ than along k∥, where k⊥ and k∥ refer to the wavevector components perpendicular and parallel to the local magnetic field. Anisotropy increases with increasing k⊥ such that excited modes are confined inside a cone bounded by k∥ ∝ k⊥^γ with γ < 1. The opening angle of the cone, θ(k⊥) ∝ k⊥^(-(1-γ)), defines the scale-dependent anisotropy.

3) MHD turbulence is generically strong in the sense that the waves which comprise it suffer order unity distortions on timescales comparable to their periods. Nevertheless, turbulent fluctuations are small deep inside the inertial range. Their energy density is less than that of the background field by a factor of θ²(k⊥) ≪ 1.

4) MHD cascades are best understood geometrically. Wave packets suffer distortions as they move along magnetic field lines perturbed by counter propagating waves. Field lines perturbed by unidirectional waves map planes perpendicular to the local field into each other. Shear Alfvén waves are responsible for the mapping's shear and slow waves for its dilatation. The amplitude of the former exceeds that of the latter by 1/θ(k⊥), which accounts for the dominance of the shear Alfvén waves in controlling the cascade dynamics.

5) Passive scalars mixed by MHD turbulence adopt the same power spectrum as the velocity and magnetic field perturbations.

6) Decaying MHD turbulence is unstable to an increase of the imbalance between the flux of waves propagating in opposite directions along the magnetic field. Forced MHD turbulence displays order unity fluctuations with respect to the balanced state if excited at low k by δ(t) correlated forcing. It appears to be statistically stable to the unlimited growth of imbalance.

7) Gradients of the dynamic variables are focused into sheets aligned with the magnetic field whose thickness is comparable to the dissipation scale. Sheets formed by oppositely directed waves are uncorrelated. We suspect that these are vortex sheets which the mean magnetic field prevents from rolling up.

8) Items (1)-(5) lend support to the model of strong MHD turbulence put forth by Goldreich and Sridhar (1995, 1997). Results from our simulations are also consistent with the GS prediction γ = 2/3. The sole notable discrepancy is that the 1D power-law spectra, E(k) ∝ k^(-α), determined from our simulations exhibit α ≈ 3/2, whereas the GS model predicts α = 5/3.

Relevance: 10.00%

Publisher:

Abstract:

Compliant foams are usually characterized by a wide range of desirable mechanical properties. These properties include viscoelasticity at different temperatures, energy absorption, recoverability under cyclic loading, impact resistance, and thermal, electrical, acoustic, and radiation resistance. Some foams contain nano-sized features and are used in small-scale devices. This implies that the characteristic dimensions of foams span multiple length scales, making their mechanical properties difficult to model. Continuum mechanics-based models capture some salient experimental features, such as the linear elastic regime followed by a non-linear plateau-stress regime. However, they lack mesostructural physical detail. This makes them incapable of accurately predicting local peaks in stress and strain distributions, which significantly affect the deformation paths. Atomistic methods are capable of capturing the physical origins of deformation at smaller scales, but suffer from impractical computational intensity. Capturing deformation at the so-called meso-scale, which can describe the phenomenon at a continuum level but with some physical insight, requires developing new theoretical approaches.

A fundamental question that motivates the modeling of foams is ‘how to extract the intrinsic material response from simple mechanical test data, such as the stress vs. strain response?’ A 3D model was developed to simulate the mechanical response of foam-type materials. The novelty of this model includes unique features such as the hardening-softening-hardening material response, strain-rate dependence, and plastically compressible solids with plastic non-normality. Suggestive links from atomistic simulations of foams were borrowed to formulate a physically informed hardening material input function. Motivated by a model that qualitatively captured the response of foam-type vertically aligned carbon nanotube (VACNT) pillars under uniaxial compression [“Analysis of Uniaxial Compression of Vertically Aligned Carbon Nanotubes,” J. Mech. Phys. Solids, 59, pp. 2227–2237 (2011); Erratum 60, 1753–1756 (2012)], the property-space exploration was advanced to three types of simple mechanical tests: 1) uniaxial compression, 2) uniaxial tension, and 3) nanoindentation with a conical and a flat-punch tip. The simulations attempt to explain some of the salient features in experimental data, such as:
1) The initial linear elastic response.
2) One or more nonlinear instabilities, yielding, and hardening.

The model-inherent relationships between the material properties and the overall stress-strain behavior were validated against the available experimental data. The material properties include the gradient in stiffness along the height, plastic and elastic compressibility, and hardening. Each of these tests was evaluated in terms of its efficiency in extracting material properties. The uniaxial simulation results proved to be a combination of structural and material influences. Of all the deformation paths, flat-punch indentation proved to be superior, since it is the most sensitive in capturing the material properties.
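As a toy illustration of the hardening-softening-hardening input response mentioned above, one simple piecewise flow-stress curve with that shape is sketched below; the functional form and constants are assumptions for illustration, not the calibrated function used in the thesis.

```python
import numpy as np

def flow_stress(eps_p, s0=1.0, h1=8.0, soft=3.0, h2=40.0, e1=0.05, e2=0.35):
    """Piecewise flow stress vs. plastic strain: harden, soften, then harden again."""
    return np.where(eps_p < e1,
                    s0 + h1 * eps_p,                                    # initial hardening
                    np.where(eps_p < e2,
                             s0 + h1 * e1 - soft * (eps_p - e1),        # softening regime
                             s0 + h1 * e1 - soft * (e2 - e1)
                             + h2 * (eps_p - e2)))                      # densification hardening

eps = np.linspace(0.0, 0.6, 7)
print(np.round(flow_stress(eps), 3))
```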

Relevance: 10.00%

Publisher:

Abstract:

The Madden-Julian Oscillation (MJO) is a pattern of intense rainfall and associated planetary-scale circulations in the tropical atmosphere, with a recurrence interval of 30-90 days. Although the MJO was first discovered 40 years ago, it is still a challenge to simulate the MJO in general circulation models (GCMs), and even with simple models it is difficult to agree on the basic mechanisms. This deficiency is mainly due to our poor understanding of moist convection—deep cumulus clouds and thunderstorms, which occur at scales that are smaller than the resolution elements of the GCMs. Moist convection is the most important mechanism for transporting energy from the ocean to the atmosphere. Success in simulating the MJO will improve our understanding of moist convection and thereby improve weather and climate forecasting.

We address this fundamental subject by analyzing observational datasets, constructing a hierarchy of numerical models, and developing theories. Parameters of the models are taken from observation, and the simulated MJO fits the data without further adjustments. The major findings include: 1) the MJO may be an ensemble of convection events linked together by small-scale high-frequency inertia-gravity waves; 2) the eastward propagation of the MJO is determined by the difference between the eastward and westward phase speeds of the waves; 3) the planetary scale of the MJO is the length over which temperature anomalies can be effectively smoothed by gravity waves; 4) the strength of the MJO increases with the typical strength of convection, which increases in a warming climate; 5) the horizontal scale of the MJO increases with the spatial frequency of convection; and 6) triggered convection, where potential energy accumulates until a threshold is reached, is important in simulating the MJO. Our findings challenge previous paradigms, which consider the MJO as a large-scale mode, and point to ways for improving the climate models.

Relevance: 10.00%

Publisher:

Abstract:

Dynamic rupture simulations are unique in their contributions to the study of earthquake physics. The current rapid development of dynamic rupture simulations poses several new questions: Do the simulations reflect the real world? Do the simulations have predictive power? Which one should we believe when the simulations disagree? This thesis illustrates how integration with observations can help address these questions and reduce the effects of non-uniqueness in both dynamic rupture simulations and kinematic inversion problems. Dynamic rupture simulations with observational constraints can effectively identify non-physical features inferred from observations. Moreover, the integrative technique can also provide more physical insight into the mechanisms of earthquakes. This thesis demonstrates two examples of such integration: dynamic rupture simulations of the Mw 9.0 2011 Tohoku-Oki earthquake and of earthquake ruptures in damaged fault zones:

(1) We develop simulations of the Tohoku-Oki earthquake based on a variety of observations and minimum assumptions about model parameters. The simulations provide realistic estimations of the stress drop and fracture energy of the region and explain the physical mechanisms of high-frequency radiation in the deep region. We also find that the overriding subduction wedge contributes significantly to the up-dip rupture propagation and large final slip in the shallow region. Such findings are also applicable to other megathrust earthquakes.

(2) Damaged fault zones are usually found around natural faults, but their effects on earthquake ruptures have been largely unknown. We simulate earthquake ruptures in damaged fault zones with material properties constrained by seismic and geological observations. We show that reflected waves in fault zones are effective at generating pulse-like ruptures and head waves tend to accelerate and decelerate rupture speeds. These mechanisms are robust in natural fault zones with large attenuation and off-fault plasticity. Moreover, earthquakes in damaged fault zones can propagate at super-Rayleigh speeds that are unstable in homogeneous media. Supershear transitions in fault zones do not require large fault stresses. In the end, we present observations in the Big Bear region, where variability of rupture speeds of small earthquakes correlates with the laterally variable materials in a damaged fault zone.

Relevance: 10.00%

Publisher:

Abstract:

Shockwave lithotripsy is a noninvasive medical procedure wherein shockwaves are repeatedly focused at the location of kidney stones in order to pulverize them. Stone comminution is thought to be the product of two mechanisms: the propagation of stress waves within the stone and cavitation erosion. However, the latter mechanism has also been implicated in vascular injury. In the present work, shock-induced bubble collapse is studied in order to understand the role that it might play in inducing vascular injury. A high-order accurate, shock- and interface-capturing numerical scheme is developed to simulate the three-dimensional collapse of the bubble in both the free-field and inside a vessel phantom. The primary contributions of the numerical study are the characterization of the shock-bubble and shock-bubble-vessel interactions across a large parameter space that includes clinical shockwave lithotripsy pressure amplitudes, problem geometry and tissue viscoelasticity, and the subsequent correlation of these interactions to vascular injury. Specifically, measurements of the vessel wall pressures and displacements, as well as the finite strains in the fluid surrounding the bubble, are utilized with available experiments in tissue to evaluate damage potential. Estimates are made of the smallest injurious bubbles in the microvasculature during both the collapse and jetting phases of the bubble's life cycle. The present results suggest that bubbles larger than 1 μm in diameter could rupture blood vessels under clinical SWL conditions.
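A classical point of reference here, far simpler than the three-dimensional viscoelastic simulations described above, is the Rayleigh collapse time of a spherical bubble of initial radius R_0 driven by a pressure difference Δp in a liquid of density ρ:

```latex
t_c \;\approx\; 0.915\, R_0 \sqrt{\frac{\rho}{\Delta p}}
```

With illustrative values ρ ≈ 10^3 kg/m^3 and Δp ≈ 40 MPa (a plausible lithotripter amplitude), a bubble with R_0 = 1 μm collapses in roughly 5 ns, which indicates how fast and localized the loading on a nearby vessel wall can be.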

Relevance: 10.00%

Publisher:

Abstract:

The current power grid is on the cusp of modernization due to the emergence of distributed generation and controllable loads, as well as renewable energy. On one hand, distributed and renewable generation is volatile and difficult to dispatch. On the other hand, controllable loads provide significant potential for compensating for the uncertainties. In a future grid where there are thousands or millions of controllable loads and a large portion of the generation comes from volatile sources like wind and solar, distributed control that shifts or reduces the power consumption of electric loads in a reliable and economic way would be highly valuable.

Load control needs to be conducted with network awareness. Otherwise, voltage violations and overloading of circuit devices are likely. To model these effects, network power flows and voltages have to be considered explicitly. However, the physical laws that determine power flows and voltages are nonlinear. Furthermore, while distributed generation and controllable loads are mostly located in distribution networks that are multiphase and radial, most of the power flow studies focus on single-phase networks.

This thesis focuses on distributed load control in multiphase radial distribution networks. In particular, we first study distributed load control without considering network constraints, and then consider network-aware distributed load control.

Distributed implementation of load control is the main challenge if network constraints can be ignored. In this case, we first ignore the uncertainties in renewable generation and load arrivals, and propose a distributed load control algorithm, Algorithm 1, that optimally schedules the deferrable loads to shape the net electricity demand. Deferrable loads refer to loads whose total energy consumption is fixed, but whose energy usage can be shifted over time in response to network conditions. Algorithm 1 is a distributed gradient descent algorithm, and empirically converges to optimal deferrable load schedules within 15 iterations.
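A toy sketch in the spirit of that distributed scheduling is given below; it is an assumption-laden illustration (quadratic flattening objective, synchronous updates, no non-negativity or rate limits), not the thesis's Algorithm 1.

```python
import numpy as np

T, N = 24, 5                                                  # time slots, deferrable loads
rng = np.random.default_rng(1)
base = 1.0 + 0.5 * np.sin(np.linspace(0.0, 2.0 * np.pi, T))   # hypothetical inflexible demand
energy = rng.uniform(2.0, 4.0, size=N)                        # each load's fixed energy requirement
p = np.tile(energy[:, None] / T, (1, T))                      # start from flat schedules

step = 0.05
for _ in range(15):                                           # ~15 iterations, per the empirical claim
    aggregate = base + p.sum(axis=0)                          # net demand broadcast to every load
    for i in range(N):
        p[i] -= step * aggregate                              # gradient of 0.5 * ||aggregate||^2 w.r.t. p_i
        p[i] += (energy[i] - p[i].sum()) / T                  # re-project onto the fixed-total-energy constraint

print("demand variance, flat schedules:", round(float(np.var(base + energy.sum() / T)), 4))
print("demand variance, after descent :", round(float(np.var(base + p.sum(axis=0))), 4))
```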

We then extend Algorithm 1 to a real-time setup where deferrable loads arrive over time, and only imprecise predictions about future renewable generation and load are available at the time of decision making. The real-time algorithm Algorithm 2 is based on model-predictive control: Algorithm 2 uses updated predictions on renewable generation as the true values, and computes a pseudo load to simulate future deferrable load. The pseudo load consumes 0 power at the current time step, and its total energy consumption equals the expectation of future deferrable load total energy request.

Network constraints, e.g., transformer loading constraints and voltage regulation constraints, bring significant challenges to the load control problem, since power flows and voltages are governed by nonlinear physical laws. Moreover, distribution networks are usually multiphase and radial. Two approaches are explored to overcome this challenge: one based on convex relaxation, and another that seeks a locally optimal load schedule.

To explore the convex relaxation approach, a novel but equivalent power flow model, the branch flow model, is developed, and a semidefinite programming relaxation, called BFM-SDP, is obtained using the branch flow model. BFM-SDP is mathematically equivalent to a standard convex relaxation proposed in the literature, but numerically is much more stable. Empirical studies show that BFM-SDP is numerically exact for the IEEE 13-, 34-, 37-, 123-bus networks and a real-world 2065-bus network, while the standard convex relaxation is numerically exact for only two of these networks.

Theoretical guarantees on the exactness of convex relaxations are provided for two types of networks: single-phase radial alternative-current (AC) networks, and single-phase mesh direct-current (DC) networks. In particular, for single-phase radial AC networks, we prove that a second-order cone program (SOCP) relaxation is exact if voltage upper bounds are not binding; we also modify the optimal load control problem so that its SOCP relaxation is always exact. For single-phase mesh DC networks, we prove that an SOCP relaxation is exact if 1) voltage upper bounds are not binding, or 2) voltage upper bounds are uniform and power injection lower bounds are strictly negative; we also modify the optimal load control problem so that its SOCP relaxation is always exact.

To seek a locally optimal load schedule, a distributed gradient-descent algorithm, Algorithm 9, is proposed. The suboptimality gap of the algorithm is rigorously characterized and is close to 0 for practical networks. Furthermore, unlike the convex relaxation approach, Algorithm 9 ensures a feasible solution. The gradients used in Algorithm 9 are estimated based on a linear approximation of the power flow, which is derived under the following assumptions: 1) line losses are negligible; and 2) voltages are reasonably balanced. Both assumptions are satisfied in practical distribution networks. Empirical results show that Algorithm 9 obtains a speedup of more than 70 times over the convex relaxation approach, at the cost of a suboptimality within numerical precision.