9 results for tense and aspect

in CaltechTHESIS


Relevance:

90.00%

Publisher:

Abstract:

Constitutive modeling in granular materials has historically been based on macroscopic experimental observations that, while usually effective at predicting the bulk behavior of these materials, suffer from important limitations when it comes to understanding the physics of the grain-to-grain interactions that cause the material to behave macroscopically in a given way under particular boundary conditions.

The advent of the discrete element method (DEM) in the late 1970s helped scientists and engineers gain deeper insight into some of the most fundamental mechanisms operating at the grain scale. However, one of the most critical limitations of classical DEM schemes has been their inability to account for complex grain morphologies; simplified geometries such as discs, spheres, and polyhedra have typically been used instead. Fortunately, over the last fifteen years new computational and experimental techniques, such as non-uniform rational basis splines (NURBS) and 3D X-ray computed tomography (3DXRCT), have been developed that enable complex grain morphologies to be included in DEM schemes.

Yet, because these tools are still being developed, gaps remain both in thoroughly understanding the physical relations connecting the grain and continuum scales and in developing discrete techniques that can predict the emergent behavior of granular materials without resorting to phenomenology, instead directly unravelling the micro-mechanical origin of macroscopic behavior.

To help close this gap, we have developed a micro-mechanical analysis of macroscopic peak strength, critical state, and residual strength in two-dimensional non-cohesive granular media, in which typical continuum constitutive quantities such as frictional strength and dilation angle are explicitly related to their grain-scale counterparts (e.g., inter-particle contact forces, fabric, particle displacements, and velocities), providing an across-the-scale basis for better understanding and modeling granular media.
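As a loose illustration of the kind of grain-to-continuum relation involved (a minimal sketch using the standard Love-Weber homogenization formula, not the thesis's own derivation), a macroscopic stress, and hence a mobilized friction angle, can be assembled directly from contact forces and branch vectors:

```python
import numpy as np

def macro_stress_2d(contact_forces, branch_vectors, area):
    """Homogenized 2D stress from contact forces f and branch vectors l
    (Love-Weber form): sigma_ij = (1/A) * sum_c f_i^c l_j^c."""
    sigma = np.zeros((2, 2))
    for f, l in zip(contact_forces, branch_vectors):
        sigma += np.outer(f, l)
    return sigma / area

def mobilized_friction_angle(sigma):
    """Mobilized friction angle (degrees) from the principal stresses,
    compression positive: sin(phi) = (s1 - s2) / (s1 + s2)."""
    s2, s1 = np.linalg.eigvalsh(sigma)     # eigenvalues in ascending order
    return np.degrees(np.arcsin((s1 - s2) / (s1 + s2)))
```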

In the same way, we utilize a new DEM scheme (LS-DEM) that takes advantage of a mathematical technique called the level set (LS) method to enable the inclusion of real grain shapes in a classical discrete element method. After calibrating LS-DEM against real experimental results, we exploit part of its potential to study the dependence of critical state (CS) parameters, such as the critical state line (CSL) slope, CSL intercept, and CS friction angle, on grain morphology, i.e., sphericity, roundness, and regularity.
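The core idea of representing each grain by a level set can be sketched as follows (a hypothetical penalty-contact check with assumed names and an assumed force law, not LS-DEM's actual formulation): boundary nodes of one grain are probed against the other grain's signed-distance field, and penetrating nodes are pushed out along the field's gradient.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

def contact_force_ls(nodes_world, phi_grid, grid_axes, k_n):
    """Probe grain A's boundary nodes against grain B's level set phi_B
    (signed distance, negative inside B); penetrating nodes receive a
    penalty force along grad(phi_B).  Illustrative only."""
    phi = RegularGridInterpolator(grid_axes, phi_grid)
    grads = np.gradient(phi_grid, *grid_axes)          # d(phi)/dx_i on the grid
    grad_interp = [RegularGridInterpolator(grid_axes, g) for g in grads]

    force = np.zeros(len(grid_axes))
    for x in nodes_world:
        pt = np.atleast_2d(x)
        d = phi(pt)[0]
        if d < 0.0:                                    # node has penetrated grain B
            n = np.array([g(pt)[0] for g in grad_interp])
            n /= np.linalg.norm(n) + 1e-12             # unit normal from the level set
            force += -k_n * d * n                      # penalty ~ penetration depth
    return force
```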

Finally, we introduce a first computational algorithm to "clone" the grain morphologies of a sample of real digital grains. This cloning algorithm allows us to generate an arbitrary number of cloned grains that exhibit the same morphological features (e.g., roundness and aspect ratio) as their real parents and can be included in a DEM simulation of a given mechanical phenomenon. In turn, this will help in developing discrete techniques that can directly predict the engineering-scale behavior of granular media without resorting to phenomenology.
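A bare-bones version of such a cloning step might look like the following (a purely hypothetical shape generator and selection rule that matches only aspect ratio; the actual algorithm matches additional descriptors such as roundness):

```python
import numpy as np

def random_grain_outline(n_pts=128, n_modes=8, rough=0.05, elong=1.3, rng=None):
    """Candidate grain outline: an ellipse of aspect ratio `elong` perturbed
    by low-amplitude random Fourier modes (hypothetical generator)."""
    rng = rng or np.random.default_rng()
    t = np.linspace(0.0, 2.0 * np.pi, n_pts, endpoint=False)
    r = np.ones(n_pts)
    for k in range(2, n_modes + 2):
        r += (rough / k) * rng.standard_normal() * np.cos(k * t + rng.uniform(0, 2 * np.pi))
    return np.c_[elong * r * np.cos(t), r * np.sin(t)]

def aspect_ratio(outline):
    """Aspect ratio from the principal axes of the outline's covariance."""
    cov = np.cov((outline - outline.mean(axis=0)).T)
    w = np.sqrt(np.linalg.eigvalsh(cov))               # ascending order
    return w[1] / w[0]

def clone_grains(parent_aspect_ratios, n_candidates=500, rng=None):
    """For each real 'parent', keep the candidate whose aspect ratio is
    closest to the parent's; other morphological features would be matched
    in the same select/accept fashion."""
    rng = rng or np.random.default_rng()
    cands = [random_grain_outline(elong=rng.uniform(1.0, 2.0), rng=rng)
             for _ in range(n_candidates)]
    ars = np.array([aspect_ratio(c) for c in cands])
    return [cands[int(np.argmin(np.abs(ars - a)))] for a in parent_aspect_ratios]
```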

Relevance:

30.00%

Publisher:

Abstract:

In this thesis, a method to retrieve the source finiteness, depth of faulting, and the mechanisms of large earthquakes from long-period surface waves is developed and applied to several recent large events.

In Chapter 1, the source finiteness parameters of eleven large earthquakes were determined from long-period Rayleigh waves recorded at IDA and GDSN stations. The basic data set consists of seismic spectra at periods from 150 to 300 s. Two simple models of source finiteness are studied. The first is a point source with finite duration. In determining the duration, or source-process time, we used Furumoto's phase method and a linear inversion method in which we simultaneously inverted the spectra and determined the source-process time that minimizes the inversion error. These two methods yielded consistent results. The second is a finite fault model. The source finiteness of large shallow earthquakes rupturing a fault plane with a large aspect ratio was modeled with the source-finiteness function introduced by Ben-Menahem, and the spectra were inverted to find the extent and direction of rupture that minimize the inversion error. This method was applied to the 1977 Sumbawa, Indonesia, 1979 Colombia-Ecuador, 1983 Akita-Oki, Japan, 1985 Valparaiso, Chile, and 1985 Michoacan, Mexico earthquakes, and yielded results consistent with the rupture extent inferred from the aftershock areas of these earthquakes.
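For reference, the classic directivity (finiteness) factor for a unilaterally propagating rupture has the sinc form sketched below; the notation here is generic textbook shorthand and is not necessarily the thesis's parameterization.

```python
import numpy as np

def finiteness_factor(omega, L, V, c, theta):
    """Directivity factor for a unilateral rupture of length L and rupture
    velocity V, observed at azimuth theta from the rupture direction with
    surface-wave phase velocity c:
        F = sinc(X) * exp(-i X),  X = (omega * L / 2) * (1/V - cos(theta)/c).
    Generic textbook form, for illustration only."""
    X = 0.5 * omega * L * (1.0 / V - np.cos(theta) / c)
    return np.sinc(X / np.pi) * np.exp(-1j * X)   # np.sinc(x) = sin(pi*x)/(pi*x)
```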

In Chapter 2, the depths and source mechanisms of nine large shallow earthquakes were determined. We inverted the data set of complex source spectra for a moment tensor (linear inversion) or a double couple (nonlinear inversion). By solving a least-squares problem, we obtained the centroid depth or the extent of the distributed source for each earthquake. The depths and source mechanisms determined from long-period Rayleigh waves depend on the assumed models of source finiteness, wave propagation, and excitation, so we tested various models of source finiteness, Q, group velocity, and excitation in the determination of earthquake depths.
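The "moment tensor (linear)" step amounts to an ordinary complex least-squares fit repeated over trial depths; a minimal sketch follows (the kernel matrix G and the depth grid search are assumptions about the setup, not the thesis's code).

```python
import numpy as np

def invert_moment_tensor(G, d):
    """Least-squares fit of the complex source spectra d by a linear
    combination of excitation kernels (columns of G); returns the
    moment-tensor elements and the residual misfit."""
    m, *_ = np.linalg.lstsq(G, d, rcond=None)
    return m, np.linalg.norm(d - G @ m)

# Centroid depth as the trial depth whose kernels fit the spectra best:
# best_depth = min(trial_depths, key=lambda h: invert_moment_tensor(G_at[h], d)[1])
```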

The depth estimates obtained using the Q model of Dziewonski and Steim (1982) and the excitation functions computed for the average ocean model of Regan and Anderson (1984) are considered the most reasonable. Dziewonski and Steim's Q model represents a good global average of Q determined over the period range of the Rayleigh waves used in this study. Since most of the earthquakes studied here occurred in subduction zones, Regan and Anderson's average ocean model is considered the most appropriate.

Our depth estimates are in general consistent with the Harvard CMT solutions. The centroid depths and their 90% confidence intervals (in parentheses), determined by Student's t test, are: Colombia-Ecuador earthquake (12 December 1979), d = 11 km, (9, 24) km; Santa Cruz Is. earthquake (17 July 1980), d = 36 km, (18, 46) km; Samoa earthquake (1 September 1981), d = 15 km, (9, 26) km; Playa Azul, Mexico earthquake (25 October 1981), d = 41 km, (28, 49) km; El Salvador earthquake (19 June 1982), d = 49 km, (41, 55) km; New Ireland earthquake (18 March 1983), d = 75 km, (72, 79) km; Chagos Bank earthquake (30 November 1983), d = 31 km, (16, 41) km; Valparaiso, Chile earthquake (3 March 1985), d = 44 km, (15, 54) km; Michoacan, Mexico earthquake (19 September 1985), d = 24 km, (12, 34) km.
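As a generic illustration of how a Student's t interval is formed (the intervals quoted above come from the thesis's own misfit analysis and are not symmetric, so this is only a sketch with assumed inputs):

```python
import numpy as np
from scipy import stats

def t_confidence_interval(estimates, conf=0.90):
    """Student's t confidence interval for a mean depth from a set of
    independent depth estimates (illustrative; not the thesis's estimator)."""
    x = np.asarray(estimates, dtype=float)
    n = x.size
    mean = x.mean()
    sem = x.std(ddof=1) / np.sqrt(n)
    half = stats.t.ppf(0.5 + conf / 2.0, df=n - 1) * sem
    return mean - half, mean + half
```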

In Chapter 3, the vertical extent of faulting of the 1983 Akita-Oki and 1977 Sumbawa, Indonesia earthquakes is determined from fundamental and overtone Rayleigh waves. Using fundamental Rayleigh waves, the depths are determined from moment tensor inversion and fault inversion. The observed overtone Rayleigh waves are compared with synthetic overtone seismograms to estimate the depth of faulting of these earthquakes. For both earthquakes, the depths obtained from overtone Rayleigh waves are consistent with those determined from fundamental Rayleigh waves. Appendix B gives the observed seismograms of fundamental and overtone Rayleigh waves for eleven large earthquakes.

Relevance:

30.00%

Publisher:

Abstract:

The first thesis topic is a perturbation method for resonantly coupled nonlinear oscillators. By successive near-identity transformations of the original equations, one obtains new equations with a simple structure that describe the long-time evolution of the motion. This technique is related to two-timing in that secular terms are suppressed in the transformation equations. The method has some important advantages: appropriate time scalings are generated naturally by the method rather than having to be guessed, as in two-timing, and by continuing the procedure to higher order one formally extends the time scale of valid approximation. Examples illustrate these claims. Using this method, we investigate resonance in conservative, non-conservative, and time-dependent problems; each example is chosen to highlight a certain aspect of the method.
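For orientation, the generic shape of such a near-identity transformation, in standard averaging/normal-form notation rather than the thesis's specific equations, is

```latex
\dot{x} = \varepsilon f(x,t), \qquad
x = y + \varepsilon\, w_1(y,t) + \varepsilon^2 w_2(y,t) + \cdots
\;\;\Longrightarrow\;\;
\dot{y} = \varepsilon\, g_1(y) + \varepsilon^2 g_2(y) + \cdots
```

where the w_i are chosen order by order to absorb the oscillatory (secular-producing) terms, so that the g_i are free of explicit fast time dependence and govern the slow evolution of y.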

The second thesis topic concerns the coupling of nonlinear chemical oscillators. The first problem is the propagation of chemical waves of an oscillating reaction in a diffusive medium. Using two-timing, we derive a nonlinear equation that determines how spatial variations in the phase of the oscillations evolve in time. This result is the key to understanding the propagation of chemical waves; in particular, we use it to account for certain experimental observations on the Belousov-Zhabotinskii reaction.

Next, we analyse the interaction between a pair of coupled chemical oscillators. This time, we derive an equation for the phase shift, which measures how far the oscillators are out of phase. This result is the key to understanding M. Marek's and I. Stuchl's results on coupled reactor systems; in particular, our model accounts for synchronization and its bifurcation into rhythm splitting.
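The flavour of such a phase-shift equation can be conveyed by the standard phase-reduction (Adler-type) form below; this is an illustrative stand-in, not the equation derived in the thesis.

```python
import numpy as np

def phase_difference(t_end, dt, d_omega, K, psi0=0.0):
    """Integrate d(psi)/dt = d_omega - 2*K*sin(psi) for the phase difference
    psi between two weakly coupled oscillators (generic Adler-type form).
    |d_omega| <= 2K  -> psi locks (synchronization);
    |d_omega| >  2K  -> psi drifts (the analogue of rhythm splitting)."""
    psi = psi0
    for _ in range(int(t_end / dt)):
        psi += dt * (d_omega - 2.0 * K * np.sin(psi))
    return psi

# phase_difference(100.0, 1e-3, d_omega=0.5, K=1.0)  # settles near arcsin(0.25)
# phase_difference(100.0, 1e-3, d_omega=3.0, K=1.0)  # grows without bound
```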

Finally, we analyse large systems of coupled chemical oscillators. Using a continuum approximation, we demonstrate mechanisms that cause auto-synchronization in such systems.

Relevance:

30.00%

Publisher:

Abstract:

Interleukin-2 (IL-2) is one of the lymphokines secreted by T helper type 1 cells upon activation mediated by the T-cell receptor (TCR) and accessory molecules. The ability to express IL-2 is correlated with T-lineage commitment and is regulated during T-cell development and differentiation. Understanding the molecular mechanism of how IL-2 gene inducibility is controlled at each transition and each differentiation step of T-cell development is to understand one aspect of T-cell development.

In the present study, we first attempted to elucidate the molecular basis for the developmental changes in IL-2 gene inducibility. We showed that IL-2 gene inducibility is acquired early in immature CD4^-CD8^-TCR^- thymocytes, prior to TCR gene rearrangement. As in mature T cells, a complete set of transcription factors can be induced at this early stage to activate IL-2 gene expression. The progression of these cells to cortical CD4^+CD8^+TCR^lo cells is accompanied by the loss of IL-2 gene inducibility. We demonstrated that the DNA-binding activities of two transcription factors, AP-1 and NF-AT, are reduced in cells at this stage. Further, the loss of factor binding, especially AP-1, is attributable to a reduced ability to activate expression of three potential components of AP-1 and NF-AT: c-Fos, FosB, and Fra-2.

We next examined the interaction of transcription factors with the IL-2 promoter in vivo, using the EL4 T-cell line and two non-T cell lines. We showed an all-or-none phenomenon in the factor-DNA interaction: in activated T cells, the IL-2 promoter is occupied by sequence-specific transcription factors when all of the transcription factors are available; in resting T cells or non-T cells, no specific protein-DNA interaction is observed when only a subset of the factors is present in the nuclei. Purposefully reducing a particular set of factor-binding activities in stimulated T cells with the pharmacological agents cyclosporin A or forskolin also abolished all interactions. These results suggest that combinatorial and coordinated protein-DNA interaction is required for IL-2 gene activation.

The thymocyte experiments clearly illustrated that multiple transcription factors are regulated during intrathymic T-cell development, and that this regulation in turn controls the inducibility of the lineage-specific IL-2 gene. The in vivo study of protein-DNA interaction stressed the combinatorial action of transcription factors in stably occupying the IL-2 promoter and initiating its transcription, and provided a molecular mechanism for changes in IL-2 gene inducibility in T cells undergoing integration of multiple environmental signals.

Relevance:

30.00%

Publisher:

Abstract:

Fundamental studies of the magnetic alignment of highly anisotropic mesostructures can enable the clean-room-free fabrication of flexible, array-based solar and electronic devices in which preferential orientation of nano- or microwire-type objects is desired. In this study, ensembles of 100-micron-long Si microwires with ferromagnetic Ni and Co coatings are oriented vertically in the presence of magnetic fields. The degree of vertical alignment and the threshold field strength depend on geometric factors, such as microwire length and ferromagnetic coating thickness, as well as on interfacial interactions, which are modulated by varying the solvent and substrate surface chemistry. Microwire ensembles with over 97% vertical alignment within 10 degrees of normal, as measured by X-ray diffraction, are achieved over square-centimeter-scale areas and set into flexible polymer films. A force balance model has been developed as a predictive tool for magnetic alignment, incorporating magnetic torque and empirically derived surface adhesion parameters. As supported by these calculations, microwires are shown to detach from the surface and align vertically in magnetic fields on the order of 100 gauss. Microwires aligned in this manner are set into a polydimethylsiloxane film, where they retain their vertical alignment after the field has been removed and can subsequently be used as a flexible solar absorber layer. Finally, these microwire arrays can be protected for use in electrochemical cells by the conformal deposition of a graphene layer.
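A minimal version of such a torque balance, with every numerical value below an assumed placeholder rather than a measured parameter from the study, might look like:

```python
import numpy as np

def threshold_field(Ms, coating_volume, adhesion_torque):
    """Smallest field (tesla) for which the magnetic torque on the coated
    wire, |m x B| = Ms*V*B*sin(theta), can exceed the surface adhesion torque,
    taking the worst case sin(theta) = 1.  A sketch of the torque-balance idea;
    the thesis's model includes empirically derived adhesion parameters."""
    m = Ms * coating_volume            # magnetic moment, A*m^2
    return adhesion_torque / m         # tesla

# Assumed, order-of-magnitude inputs (NOT values from the study):
Ms = 4.9e5                                         # A/m, roughly bulk Ni
V_coat = np.pi * 2e-6 * 100e-9 * 100e-6            # ~pi*d*t*L coating shell, m^3
B_min = threshold_field(Ms, V_coat, adhesion_torque=1e-13)   # N*m, assumed
print(f"{B_min * 1e4:.0f} gauss")                  # tesla -> gauss
```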

Relevance:

30.00%

Publisher:

Abstract:

Researchers have spent decades refining and improving their methods for fabricating smaller, finer-tuned, higher-quality nanoscale optical elements, with the goal of making more sensitive and accurate optical measurements of the world around them. Quantum optics has been a well-established tool of choice for making these increasingly sensitive measurements, which have repeatedly pushed the limits on measurement accuracy set forth by quantum mechanics. A recent development in quantum optics has been the creative integration of robust, high-quality, well-established macroscopic experimental systems with highly engineerable on-chip nanoscale oscillators fabricated in cleanrooms. However, merging large systems with nanoscale oscillators often requires the oscillators to have extremely high aspect ratios, which makes them delicate and difficult to fabricate with experimentally reasonable repeatability, yield, and quality. In this work we give an overview of our research on microscopic oscillators coupled to macroscopic optical cavities, with the goal of cooling them to their motional ground state in room-temperature environments.

The quality factor of a mechanical resonator is an important figure of merit for various sensing applications and for observing quantum behavior. We demonstrated a technique for pushing the quality factor of a micromechanical resonator beyond conventional material and fabrication limits by using an optical field to stiffen and trap a particular motional mode of a nanoscale oscillator. Optical forces increase the oscillation frequency by storing most of the mechanical energy in a nearly lossless optical potential, thereby strongly diluting the effects of material dissipation. By placing a 130 nm thick SiO2 pendulum in an optical standing wave, we increase the pendulum's center-of-mass frequency from 6.2 to 145 kHz. The corresponding quality factor increases 50-fold from its intrinsic value to a final value of Q_m = 5.8(1.1) x 10^5, representing more than an order of magnitude improvement over the conventional limits of SiO2 for a pendulum geometry. Our technique may enable new opportunities for mechanical sensing and facilitate observations of quantum behavior in this class of mechanical systems.

We then give a detailed overview of the techniques used to produce high-aspect-ratio nanostructures with applications in a wide range of quantum optics experiments. The ability to fabricate such nanodevices with high precision opens the door to a vast array of experiments that integrate macroscopic optical setups with lithographically engineered nanodevices. Coupled with atom-trapping experiments in the Kimble Lab, we use these techniques to realize a new waveguide chip designed to address ultra-cold atoms along lithographically patterned nanobeams that have large atom-photon coupling and near-4π steradian optical access for cooling and trapping atoms. We describe a fully integrated and scalable design in which cold atoms are spatially overlapped with the nanostring cavities in order to observe a resonant optical depth of d0 ≈ 0.15. The nanodevice illuminates new possibilities for integrating atoms into photonic circuits and engineering quantum states of atoms and light on a microscopic scale.

We then describe our work with superconducting microwave resonators coupled to a phononic cavity, toward the goal of building an integrated device for quantum-limited microwave-to-optical wavelength conversion.
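As an aside on the optical-spring numbers quoted above: since the mode frequency scales as the square root of the total stiffness, the 6.2 to 145 kHz shift implies that the optical potential stores a few hundred times more energy than the material restoring force, which is the sense in which material dissipation is "diluted". A one-line check using only the quoted frequencies:

```python
f0, f_trap = 6.2e3, 145e3                 # Hz, quoted pendulum frequencies
k_ratio = (f_trap / f0) ** 2 - 1.0        # optical stiffness / material stiffness
print(f"optical spring ~{k_ratio:.0f}x stiffer than the material restoring force")
```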
We give an overview of our characterization of several types of substrates for fabricating a low-loss, high-frequency electromechanical system. We describe our electromechanical system, fabricated on a Si3N4 membrane, which consists of a 12 GHz superconducting LC resonator coupled capacitively to the high-frequency localized modes of a phononic nanobeam. Using our suspended-membrane geometry, we isolate the system from substrates with significant loss tangents, drastically reducing the parasitic capacitance of our superconducting circuit to ≈ 2.5 fF. This opens up a number of possibilities for a new class of low-loss, high-frequency electromechanical devices with relatively large electromechanical coupling. We present our substrate studies, fabrication methods, and device characterization.
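For scale, a 12 GHz LC resonance with a capacitance as small as the quoted 2.5 fF parasitic value would require an inductance of roughly 70 nH; the assumption that the parasitic capacitance dominates is ours, purely for illustration, since the actual circuit also includes coupling capacitance.

```python
import numpy as np

def lc_frequency(L, C):
    """Resonance frequency of an LC circuit: f = 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / (2.0 * np.pi * np.sqrt(L * C))

# Inductance needed for a 12 GHz resonance if C were only the 2.5 fF parasitic:
L_req = 1.0 / ((2.0 * np.pi * 12e9) ** 2 * 2.5e-15)
print(f"L ~ {L_req * 1e9:.0f} nH, f = {lc_frequency(L_req, 2.5e-15) / 1e9:.1f} GHz")
```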

Relevance:

30.00%

Publisher:

Abstract:

Network information theory and channels with memory are two important but difficult frontiers of information theory. In this two-part dissertation, we study these two areas, each comprising one part. For the first area, we study the so-called entropy vectors via finite group theory, and the network codes constructed from finite groups. In particular, we identify the smallest finite group that violates the Ingleton inequality, an inequality respected by all linear network codes but not satisfied by all entropy vectors. Based on the analysis of this group, we generalize it to several families of Ingleton-violating groups, which may be used to design good network codes. In that direction, we study the network codes constructed from finite groups, and in particular show that linear network codes are embedded in the group network codes constructed from these Ingleton-violating families. Furthermore, such codes are strictly more powerful than linear network codes, as they are able to violate the Ingleton inequality while linear network codes cannot.

For the second area, we study the impact of memory on channel capacity through a novel communication system: the energy harvesting channel. Unlike traditional communication systems, the transmitter of an energy harvesting channel is powered by an exogenous energy harvesting device and a finite-sized battery. As a consequence, at each channel use the system can transmit only a symbol whose energy consumption is no more than the energy currently available. This new type of power supply introduces an unprecedented input constraint for the channel, which is random, instantaneous, and has memory. Furthermore, the energy harvesting process is naturally observed causally at the transmitter, while no such information is provided to the receiver. Both of these features pose great challenges for the analysis of the channel capacity. In this work we use techniques from channels with side information and finite state channels to obtain lower and upper bounds on the capacity of the energy harvesting channel. In particular, we study the stationarity and ergodicity conditions of a surrogate channel in order to compute and optimize achievable rates for the original channel. In addition, for practical code design we study the pairwise error probabilities of the input sequences.
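To make the group-theoretic construction from the first part concrete: for subgroups G1, ..., G4 of a finite group G, the vector h_S = log(|G| / |intersection of G_i, i in S|) is always an entropy vector, and checking whether a given group/subgroup tuple violates the Ingleton inequality reduces to the comparison sketched below (a generic illustration of the construction, not the dissertation's code).

```python
from math import log2

def subgroup_entropy(G, subgroups, S):
    """h_S = log2(|G| / |intersection of G_i for i in S|) in the
    group-characterizable entropy-vector construction.  G and each G_i are
    frozensets of group elements (e.g., permutations stored as tuples)."""
    inter = set(G)
    for i in S:
        inter &= subgroups[i]
    return log2(len(G) / len(inter))

def violates_ingleton(G, G1, G2, G3, G4):
    """Ingleton inequality for four variables:
        h1 + h2 + h34 + h123 + h124  <=  h12 + h13 + h14 + h23 + h24.
    Returns True if the induced entropy vector violates it."""
    subs = {1: G1, 2: G2, 3: G3, 4: G4}
    h = lambda *S: subgroup_entropy(G, subs, S)
    lhs = h(1) + h(2) + h(3, 4) + h(1, 2, 3) + h(1, 2, 4)
    rhs = h(1, 2) + h(1, 3) + h(1, 4) + h(2, 3) + h(2, 4)
    return lhs > rhs + 1e-9
```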

Relevance:

30.00%

Publisher:

Abstract:

A person living in an industrialized society has almost no choice but to receive information daily that has negative implications for himself or others. His attention will often be drawn to the ups and downs of economic indicators or to the alleged misdeeds of leaders and organizations. Reacting to new information is central to economics, but economics typically ignores the affective aspect of the response, for example stress or anger. These essays present the results of considering how the affective aspect of the response can influence economic outcomes.

The first chapter presents an experiment in which individuals were presented with information about various non-profit organizations and allowed to take actions that rewarded or punished those organizations. When social interaction was introduced into this environment, an asymmetry between rewarding and punishing appeared: the net effects of punishment became greater and more variable, whereas the effects of reward were unchanged. Individuals were more strongly influenced by negative social information and used that information to target unpopular organizations. These behaviors contributed to an increase in inequality among the outcomes of the organizations.

The second and third chapters present empirical studies of reactions to negative information about local economic conditions. Economic factors are among the most prevalent stressors, and stress is known to have numerous negative effects on health. These chapters document localized, transient effects of announcements of large-scale job losses. News of mass layoffs and shutdowns of large military bases is found to decrease birth weights and gestational ages among babies born in the affected regions, with effect magnitudes close to those estimated in similar studies of disasters.

Relevance:

30.00%

Publisher:

Abstract:

STEEL, the nonlinear large-displacement analysis software created at Caltech, is currently used by a large number of researchers at Caltech. However, due to its complexity and lack of visualization tools (such as pre- and post-processing capabilities), rapid creation and analysis of models with this software was difficult. SteelConverter was created to facilitate model creation through the industry-standard finite element solver ETABS. This software allows users to create models in ETABS and intelligently convert model information such as geometry, loading, releases, fixity, etc., into a format that STEEL understands. Models that would take several days to create and verify now take several hours or less, greatly increasing the productivity of the researcher as well as the level of confidence in the model being analyzed.

It has always been a major goal of Caltech to spread the knowledge created here to other universities. However, due to the complexity of STEEL, it was difficult for researchers or engineers at other universities to conduct analyses. While SteelConverter did help researchers at Caltech improve their research, sending SteelConverter and its documentation to other universities was less than ideal: issues of version control, individual computer requirements, and the difficulty of releasing updates made a more centralized solution preferable. This is where the idea for Caltech VirtualShaker was born. Through the creation of a centralized website where users can log in, submit, analyze, and process models in the cloud, all of the major concerns associated with the use of SteelConverter were eliminated. Caltech VirtualShaker allows users to create profiles in which defaults associated with their most commonly run models are saved, and to submit multiple jobs to an online virtual server to be analyzed and post-processed. The creation of this website not only allows for more rapid distribution of the tool, but also gives engineers and researchers without access to powerful computer clusters a means to run computationally intensive analyses without the excessive cost of building and maintaining a cluster.

To increase confidence in the use of STEEL as an analysis system, and to verify the conversion tools, a series of comparisons was made between STEEL and ETABS. Six models of increasing complexity, ranging from a cantilever column to a twenty-story moment frame, were analyzed to determine the ability of STEEL to accurately calculate basic model properties, such as elastic stiffness and damping through a free vibration analysis, as well as more complex structural properties, such as overall structural capacity through a pushover analysis. These analyses showed very strong agreement between the two programs in every aspect of each analysis. However, they also showed that the STEEL analysis algorithm can converge at significantly larger drifts than ETABS when using the more computationally expensive and structurally realistic fiber hinges. Following the ETABS comparison, it was decided to repeat the comparisons in software more capable of conducting highly nonlinear analysis, called Perform. These analyses again showed very strong agreement between the two programs in every aspect of each analysis through instability. However, due to some limitations in Perform, free vibration analyses for the three-story one-bay chevron-braced frame, the two-bay chevron-braced frame, and the twenty-story moment frame could not be conducted. With the current trend toward ultimate-capacity analysis, the ability to use fiber-based models allows engineers to gain a better understanding of a building's behavior under these extreme load scenarios.
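For context, extracting damping and period from a free-vibration record is typically done with the logarithmic decrement; a generic post-processing sketch of that step (not the STEEL or ETABS implementation) is:

```python
import numpy as np

def damping_from_free_vibration(displacement, dt):
    """Estimate damped natural frequency and damping ratio from a free-vibration
    decay record via successive positive peaks and the logarithmic decrement:
        delta = ln(x_i / x_{i+1}),  zeta = delta / sqrt(4*pi^2 + delta^2)."""
    x = np.asarray(displacement, dtype=float)
    peaks = [i for i in range(1, len(x) - 1)
             if x[i] > x[i - 1] and x[i] > x[i + 1] and x[i] > 0]
    if len(peaks) < 2:
        raise ValueError("need at least two positive peaks")
    delta = np.log(x[peaks[:-1]] / x[peaks[1:]]).mean()   # logarithmic decrement
    zeta = delta / np.sqrt(4.0 * np.pi ** 2 + delta ** 2) # damping ratio
    period = dt * np.mean(np.diff(peaks))                 # damped period
    return 1.0 / period, zeta
```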

Following this, a final study was done on Hall's U20 structure [1], in which the structure was analyzed in all three programs and the results compared. The pushover curves from each program were compared and the differences caused by variations in software implementation were explained. From this, conclusions can be drawn about the effectiveness of each analysis tool when attempting to analyze structures through the point of geometric instability. The analyses show that while ETABS was capable of accurately determining the elastic stiffness of the model, the analysis failed to converge following the onset of inelastic behavior. However, for the small number of time steps over which the ETABS analysis was converging, its results exactly matched those of STEEL, leading to the conclusion that ETABS is not an appropriate analysis package for analyzing a structure through the point of collapse when fiber elements are used throughout the model. The analyses also showed that while Perform was capable of calculating the response of the structure accurately, restrictions in its material model resulted in a pushover curve that did not exactly match that of STEEL, particularly post-collapse. Such problems could be alleviated by choosing a simpler material model.