15 results for LARGE DELETION in CaltechTHESIS


Relevance: 20.00%

Abstract:

Forced vibration field tests and finite element studies have been conducted on Morrow Point (arch) Dam to investigate dynamic dam-water interaction and water compressibility. The design of the data acquisition system incorporates several special features to retrieve both the amplitude and phase of the response in a low signal-to-noise environment. These features contributed to the success of the experimental program, which, for the first time, produced field evidence of water compressibility; this effect appears to play a significant role only in the symmetric response of Morrow Point Dam in the frequency range examined. In the accompanying analysis, frequency response curves for measured accelerations and water pressures, as well as their resonating shapes, are compared to predictions from the current state-of-the-art finite element model, computed both with and without water compressibility. Calibration of the numerical model employs the antisymmetric response data, since these are only slightly affected by water compressibility; after calibration, good agreement with the data is obtained whether or not water compressibility is included. In the effort to reproduce the symmetric response data, on which water compressibility has a significant influence, the calibrated model shows better correlation when water compressibility is included, but the agreement is still inadequate. Similar results are obtained using data collected previously by others at a low water level. A successful isolation of the fundamental water resonance from the experimental data shows significantly different features from those of the numerical water model, indicating possible inaccuracy in the assumed geometry and/or boundary conditions for the reservoir. However, the investigation does suggest possible directions in which the numerical model can be improved.
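
The abstract does not spell out the demodulation scheme, but the conventional way to recover amplitude and phase at a known forcing frequency in low signal-to-noise conditions is synchronous demodulation (a software lock-in). A minimal sketch, with all names, frequencies, and amplitudes chosen for illustration rather than taken from the thesis:

```python
import numpy as np

def lock_in(signal, fs, f_drive):
    """Recover amplitude and phase at a known drive frequency by
    synchronous demodulation (a software lock-in amplifier)."""
    t = np.arange(signal.size) / fs
    # Mix with in-phase and quadrature references at the drive frequency
    # and average; off-frequency noise averages toward zero.
    i_comp = 2.0 * np.mean(signal * np.cos(2 * np.pi * f_drive * t))
    q_comp = 2.0 * np.mean(signal * np.sin(2 * np.pi * f_drive * t))
    amplitude = np.hypot(i_comp, q_comp)
    phase = np.arctan2(-q_comp, i_comp)  # relative to the cosine drive
    return amplitude, phase

# Example: a weak 5 Hz response buried in noise 20x its amplitude.
fs, f0 = 1000.0, 5.0
t = np.arange(0, 60, 1 / fs)
x = 0.05 * np.cos(2 * np.pi * f0 * t - 0.7) + np.random.randn(t.size)
print(lock_in(x, fs, f0))  # approximately (0.05, -0.7)
```

Averaging over an integer number of drive cycles rejects broadband noise in proportion to the record length, which is what makes amplitude and phase recoverable well below the noise floor.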

Relevance: 20.00%

Abstract:

In this thesis, a method to retrieve the source finiteness, depth of faulting, and the mechanisms of large earthquakes from long-period surface waves is developed and applied to several recent large events.

In Chapter 1, source finiteness parameters of eleven large earthquakes were determined from long-period Rayleigh waves recorded at IDA and GDSN stations. The basic data set is the seismic spectra of periods from 150 to 300 sec. Two simple models of source finiteness are studied. The first model is a point source with finite duration. In the determination of the duration or source-process times, we used Furumoto's phase method and a linear inversion method, in which we simultaneously inverted the spectra and determined the source-process time that minimizes the error in the inversion. These two methods yielded consistent results. The second model is the finite fault model. Source finiteness of large shallow earthquakes with rupture on a fault plane with a large aspect ratio was modeled with the source-finiteness function introduced by Ben-Menahem. The spectra were inverted to find the extent and direction of the rupture of the earthquake that minimize the error in the inversion. This method is applied to the 1977 Sumbawa, Indonesia, 1979 Colombia-Ecuador, 1983 Akita-Oki, Japan, 1985 Valparaiso, Chile, and 1985 Michoacan, Mexico earthquakes. The method yielded results consistent with the rupture extent inferred from the aftershock area of these earthquakes.
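
For reference, the source-finiteness function introduced by Ben-Menahem has the standard directivity form below (quoted from the general literature, not transcribed from the thesis, which may use a variant):

```latex
% Directivity (finiteness) factor for unilateral rupture of length L
% propagating at velocity V, observed at azimuth \theta from the
% rupture direction, for waves of phase velocity c:
F(\omega) = \frac{\sin X}{X}\, e^{-iX},
\qquad
X = \frac{\omega L}{2}\left(\frac{1}{V} - \frac{\cos\theta}{c}\right)
% Fitting the spectral nodes and amplitude modulation of F over a
% range of azimuths constrains L, V, and the rupture direction.
```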

In Chapter 2, the depths and source mechanisms of nine large shallow earthquakes were determined. We inverted the data set of complex source spectra for a moment tensor (linear) or a double couple (nonlinear). By solving a least-squares problem, we obtained the centroid depth or the extent of the distributed source for each earthquake. The depths and source mechanisms of large shallow earthquakes determined from long-period Rayleigh waves depend on the models of source finiteness, wave propagation, and the excitation. We tested various models of the source finiteness, Q, the group velocity, and the excitation in the determination of earthquake depths.
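
A sketch of the linear (moment tensor) branch of such an inversion, in my notation rather than the thesis's, under the usual assumption that the spectra are linear in the moment-tensor elements:

```latex
% At a trial centroid depth h, the observed complex spectra d are
% linear in the moment-tensor elements m through excitation kernels
% G(h), so the least-squares estimate and best depth follow from:
d = G(h)\, m,
\qquad
\hat{m}(h) = \left(G^{\dagger} G\right)^{-1} G^{\dagger} d,
\qquad
\hat{h} = \arg\min_h \left\lVert d - G(h)\,\hat{m}(h) \right\rVert^2
% The double-couple case constrains m, making the inversion nonlinear.
```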

The depth estimates obtained using the Q model of Dziewonski and Steim (1982) and the excitation functions computed for the average ocean model of Regan and Anderson (1984) are considered most reasonable. Dziewonski and Steim's Q model represents a good global average of Q determined over the period range of the Rayleigh waves used in this study. Since most of the earthquakes studied here occurred in subduction zones, Regan and Anderson's average ocean model is considered most appropriate.

Our depth estimates are in general consistent with the Harvard CMT solutions. The centroid depths and their 90% confidence intervals (in parentheses), determined by Student's t test, are:

Colombia-Ecuador earthquake (12 December 1979): d = 11 km (9, 24) km
Santa Cruz Is. earthquake (17 July 1980): d = 36 km (18, 46) km
Samoa earthquake (1 September 1981): d = 15 km (9, 26) km
Playa Azul, Mexico earthquake (25 October 1981): d = 41 km (28, 49) km
El Salvador earthquake (19 June 1982): d = 49 km (41, 55) km
New Ireland earthquake (18 March 1983): d = 75 km (72, 79) km
Chagos Bank earthquake (30 November 1983): d = 31 km (16, 41) km
Valparaiso, Chile earthquake (3 March 1985): d = 44 km (15, 54) km
Michoacan, Mexico earthquake (19 September 1985): d = 24 km (12, 34) km

In Chapter 3, the vertical extent of faulting of the 1983 Akita-Oki and 1977 Sumbawa, Indonesia earthquakes is determined from fundamental and overtone Rayleigh waves. Using fundamental Rayleigh waves, the depths are determined from moment tensor inversion and fault inversion. The observed overtone Rayleigh waves are compared to synthetic overtone seismograms to estimate the depth of faulting of these earthquakes. For both earthquakes, the depths obtained from overtone Rayleigh waves are consistent with those determined from fundamental Rayleigh waves. Appendix B gives the observed seismograms of fundamental and overtone Rayleigh waves for eleven large earthquakes.

Relevance: 20.00%

Abstract:

Computer science and electrical engineering have been the great success story of the twentieth century. The neat modularity and mapping of a language onto circuits have led to robots on Mars, desktop computers, and smartphones. But these devices are not yet able to do some of the things that life takes for granted: repair a scratch, reproduce, regenerate, or grow exponentially fast, all while remaining functional.

This thesis explores and develops algorithms, molecular implementations, and theoretical proofs in the context of “active self-assembly” of molecular systems. The long-term vision of active self-assembly is the theoretical and physical implementation of materials that are composed of reconfigurable units with the programmability and adaptability of biology’s numerous molecular machines. En route to this goal, we must first find a way to overcome the memory limitations of molecular systems, and to discover the limits of complexity that can be achieved with individual molecules.

One of the main thrusts in molecular programming is to use computer science as a tool for figuring out what can be achieved. While molecular systems that are Turing-complete have been demonstrated [Winfree, 1996], these systems still cannot achieve some of the feats biology has achieved.

One might think that because a system is Turing-complete, capable of computing “anything,” it can perform any arbitrary task. But while such a system can simulate any digital computational problem, there are many behaviors that are not “computations” in a classical sense and cannot be directly implemented. Examples include exponential growth and molecular motion relative to a surface.

Passive self-assembly systems cannot implement these behaviors because (a) molecular motion relative to a surface requires a source of fuel that is external to the system, and (b) passive systems are too slow to assemble exponentially-fast-growing structures. We call these behaviors “energetically incomplete” programmable behaviors. This class of behaviors includes any behavior where a passive physical system simply does not have enough physical energy to perform the specified tasks in the requisite amount of time.

As we will demonstrate and prove, a sufficiently expressive implementation of an “active” molecular self-assembly approach can achieve these behaviors. Using an external source of fuel solves part of the problem, so the system is not “energetically incomplete.” But the programmable system also needs sufficient expressive power to achieve the specified behaviors. Perhaps surprisingly, some of these systems do not even require Turing completeness to be sufficiently expressive.

Building on a large variety of work by other scientists in the fields of DNA nanotechnology, chemistry and reconfigurable robotics, this thesis introduces several research contributions in the context of active self-assembly.

We show that simple primitives such as insertion and deletion are able to generate complex and interesting results, such as the growth of a linear polymer in logarithmic time and the ability of a linear polymer to treadmill. To this end we developed a formal model for active self-assembly that is directly implementable with DNA molecules. We show that this model is computationally equivalent to a machine capable of producing string languages that are strictly stronger than the regular languages and, at most, as strong as context-free grammars. This is a great advance in the theory of active self-assembly, as prior models were either entirely theoretical or only implementable in the context of macro-scale robotics.

We developed a chain reaction method for the autonomous exponential growth of a linear DNA polymer. Our method is based on the insertion of molecules into the assembly, which generates two new insertion sites for every initial one employed. The building of a line in logarithmic time is a first step toward building a shape in logarithmic time. We demonstrate the first construction of a synthetic linear polymer that grows exponentially fast via insertion. We show that monomer molecules are converted into the polymer in logarithmic time via spectrofluorimetry and gel electrophoresis experiments. We also demonstrate the division of these polymers via the addition of a single DNA complex that competes with the insertion mechanism. This shows the growth of a population of polymers in logarithmic time. We characterize the DNA insertion mechanism that we utilize in Chapter 4. We experimentally demonstrate that we can control the kinetics of this reaction over at least seven orders of magnitude, by programming the sequences of DNA that initiate the reaction.
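
As a toy illustration of the logarithmic-time claim (my simplification, not the thesis's kinetic model): if each inserted monomer exposes two new insertion sites, then the number of sites, and hence the length added per round, doubles every round, so length n is reached in O(log n) rounds.

```python
import math

def rounds_to_length(target_len):
    """Toy insertion-growth model: every active insertion site accepts
    one monomer per round, and each insertion exposes two new sites."""
    length, sites, rounds = 1, 1, 0
    while length < target_len:
        length += sites  # each site incorporates one monomer
        sites *= 2       # each insertion creates two new sites
        rounds += 1
    return rounds

for n in (10, 1_000, 1_000_000):
    print(n, rounds_to_length(n), math.ceil(math.log2(n)))
# The number of rounds tracks log2(n): 4, 10, 20.
```

Contrast this with passive end-growth, where only the polymer ends are active, so reaching length n requires O(n) monomer additions in series.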

In addition, we review co-authored work on programming molecular robots using prescriptive landscapes of DNA origami; this was the first microscopic demonstration of programming a molecular robot to walk on a 2-dimensional surface. We developed a snapshot method for imaging these random-walking molecular robots and a CAPTCHA-like analysis method for difficult-to-interpret imaging data.

Relevance: 20.00%

Abstract:

Galaxy clusters are the largest gravitationally bound objects in the observable universe, and they are formed from the largest perturbations of the primordial matter power spectrum. During initial cluster collapse, matter is accelerated to supersonic velocities, and the baryonic component is heated as it passes through accretion shocks. This process stabilizes when the pressure of the bound matter prevents further gravitational collapse. Galaxy clusters are useful cosmological probes because their formation progressively freezes out at the epoch when dark energy begins to dominate the expansion and energy density of the universe. A diverse set of observables, from radio through X-ray wavelengths, is sourced from galaxy clusters, and this is useful for self-calibration. The distributions of these observables trace a cluster's dark matter halo, which represents more than 80% of the cluster's gravitational potential. One such observable is the Sunyaev-Zel'dovich effect (SZE), which results when the ionized intracluster medium boosts cosmic microwave background photons to higher energies via inverse Compton scattering. Great technical advances in the last several decades have made regular observation of the SZE possible. Resolved SZE science, such as is explored in this analysis, has benefitted from the construction of large-format camera arrays consisting of highly sensitive millimeter-wave detectors, such as Bolocam. Bolocam is a millimeter-wave camera, sensitive to 140 GHz and 268 GHz radiation, located at one of the best observing sites in the world: the Caltech Submillimeter Observatory on Mauna Kea in Hawaii. Bolocam fielded 144 of the original spider-web NTD bolometers used in an entire generation of ground-based, balloon-borne, and satellite-borne millimeter-wave instrumentation. Over approximately six years, our group at Caltech has developed a mature galaxy cluster observational program with Bolocam. This thesis describes the construction of the instrument's full cluster catalog: BOXSZ. Using this catalog, I have scaled the Bolocam SZE measurements with X-ray mass approximations in an effort to characterize the SZE signal as a viable mass probe for cosmology. This work has confirmed the SZE to be a low-scatter tracer of cluster mass. The analysis has also revealed how sensitive the SZE-mass scaling is to small biases in the adopted mass approximation. Future Bolocam analysis efforts are set on resolving these discrepancies by approximating cluster mass jointly with different observational probes.
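
For context, the standard thermal SZE relations (from the general literature, not specific to this thesis): the signal amplitude is set by the Compton y parameter, the line-of-sight integral of electron pressure, which is why the integrated SZE signal tracks the thermal energy, and hence the mass, of the intracluster gas.

```latex
% Thermal SZE distortion in terms of the Compton y parameter:
\frac{\Delta T}{T_{\mathrm{CMB}}} = f(x)\, y,
\qquad
y = \frac{\sigma_T}{m_e c^2} \int P_e \,\mathrm{d}l
% x = h\nu / (k_B T_{\mathrm{CMB}}); in the non-relativistic limit
% f(x) = x \coth(x/2) - 4, which is negative below ~218 GHz (a
% decrement at 140 GHz) and positive above (an increment at 268 GHz).
```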

Relevance: 20.00%

Abstract:

Studies in turbulence often focus on two flow conditions, both of which occur frequently in real-world flows and are sought-after for their value in advancing turbulence theory. These are the high Reynolds number regime and the effect of wall surface roughness. In this dissertation, a Large-Eddy Simulation (LES) recreates both conditions over a wide range of Reynolds numbers Re_τ = O(10^2)-O(10^8) and accounts for roughness by locally modeling the statistical effects of near-wall anisotropic fine scales in a thin layer immediately above the rough surface. A subgrid, roughness-corrected wall model is introduced to dynamically transmit this modeled information from the wall to the outer LES, which uses a stretched-vortex subgrid-scale model operating in the bulk of the flow. Of primary interest is the Reynolds number and roughness dependence of these flows in terms of first and second order statistics. The LES is first applied to a fully turbulent uniformly-smooth/rough channel flow to capture the flow dynamics over smooth, transitionally rough and fully rough regimes. Results include a Moody-like diagram for the wall averaged friction factor, believed to be the first of its kind obtained from LES. Confirmation is found for experimentally observed logarithmic behavior in the normalized stream-wise turbulent intensities. Tight logarithmic collapse, scaled on the wall friction velocity, is found for smooth-wall flows when Re_τ ≥ O(10^6) and in fully rough cases. Since the wall model operates locally and dynamically, the framework is used to investigate non-uniform roughness distribution cases in a channel, where the flow adjustments to sudden surface changes are investigated. Recovery of mean quantities and turbulent statistics after transitions are discussed qualitatively and quantitatively at various roughness and Reynolds number levels. The internal boundary layer, which is defined as the border between the flow affected by the new surface condition and the unaffected part, is computed, and a collapse of the profiles on a length scale containing the logarithm of friction Reynolds number is presented. Finally, we turn to the possibility of expanding the present framework to accommodate more general geometries. As a first step, the whole LES framework is modified for use in the curvilinear geometry of a fully-developed turbulent pipe flow, with implementation carried out in a spectral element solver capable of handling complex wall profiles. The friction factors have shown favorable agreement with the superpipe data, and the LES estimates of the Karman constant and additive constant of the log-law closely match values obtained from experiment.
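
For reference, the classical relations behind the "logarithmic behavior" and "log-law" the abstract refers to (standard forms and textbook constants, not values from this thesis):

```latex
% Smooth-wall log law for the mean velocity, and its fully rough
% counterpart with equivalent sand-grain roughness k_s:
U^+ = \frac{1}{\kappa}\ln y^+ + B,
\qquad
U^+ = \frac{1}{\kappa}\ln\frac{y}{k_s} + B_s
% Typical constants: \kappa \approx 0.4, B \approx 5.0, B_s \approx 8.5.
% Friction-factor (Moody-like) curves follow from evaluating the
% profile at the channel centerline or pipe axis.
```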

Relevance: 20.00%

Abstract:

In the measurement of the Higgs boson decaying into two photons, the parametrization of an appropriate background model is essential for fitting the Higgs signal mass peak over a continuous background. This diphoton background modeling is crucial in the statistical process of calculating exclusion limits and the significance of observations in comparison to a background-only hypothesis. It is therefore desirable to know the physical shape of the background mass distribution, as the use of an improper function can lead to biases in the observed limits. Using an Information-Theoretic (I-T) approach for valid inference, we apply the Akaike Information Criterion (AIC) as a measure of the separation of a fitting model from the data. We then implement a multi-model inference ranking method to build a fit model that most closely represents the Standard Model background in 2013 diphoton data recorded by the Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider (LHC). Potential applications and extensions of this model-selection technique are discussed with reference to CMS detector performance measurements as well as in potential physics analyses at future detectors.
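
The standard definitions used in AIC-based multi-model inference (textbook forms, not transcribed from the thesis):

```latex
% Akaike Information Criterion for a model with k free parameters
% and maximized likelihood \hat{L}:
\mathrm{AIC} = 2k - 2\ln\hat{L}
% Multi-model inference ranks candidates by
% \Delta_i = \mathrm{AIC}_i - \mathrm{AIC}_{\min}
% and combines or compares them via Akaike weights:
w_i = \frac{\exp(-\Delta_i/2)}{\sum_j \exp(-\Delta_j/2)}
```

The penalty term 2k disfavors adding background-shape parameters that do not materially improve the likelihood, which is the guard against the fit biases the abstract describes.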

Relevance: 20.00%

Abstract:

The genomes of many positive stranded RNA viruses and of all retroviruses are translated as large polyproteins which are proteolytically processed by cellular and viral proteases. Viral proteases are structurally related to two families of cellular proteases, the pepsin-like and trypsin-like proteases. This thesis describes the proteolytic processing of several nonstructural proteins of dengue 2 virus, a representative member of the Flaviviridae, and describes methods for transcribing full-length genomic RNA of dengue 2 virus. Chapter 1 describes the in vitro processing of the nonstructural proteins NS2A, NS2B and NS3. Chapter 2 describes a system that allows identification of residues within the protease that are directly or indirectly involved with substrate recognition. Chapter 3 describes methods to produce genome length dengue 2 RNA from cDNA templates.

The nonstructural protein NS3 is structurally related to viral trypsin-like proteases from the alpha-, picorna-, poty-, and pestiviruses. The hypothesis that the flavivirus nonstructural protein NS3 is a viral proteinase that generates the termini of several nonstructural proteins was tested using an efficient in vitro expression system and antisera specific for the nonstructural proteins NS2B and NS3. A series of cDNA constructs was transcribed using T7 RNA polymerase and the RNA translated in reticulocyte lysates. Proteolytic processing occurred in vitro to generate NS2B and NS3. The amino termini of NS2B and NS3 produced in vitro were found to be the same as the termini of NS2B and NS3 isolated from infected cells. Deletion analysis of cDNA constructs localized the protease domain necessary and sufficient for correct cleavage to the first 184 amino acids of NS3. Kinetic analysis of processing events in vitro and experiments to examine the sensitivity of processing to dilution suggested that an intramolecular cleavage between NS2A and NS2B preceded an intramolecular cleavage between NS2B and NS3. The data from these expression experiments confirm that NS3 is the viral proteinase responsible for cleavage events generating the amino termini of NS2B and NS3 and presumably for cleavages generating the termini of NS4A and NS5 as well.

Biochemical and genetic experiments using viral proteinases have defined the sequence requirements for cleavage site recognition, but have not identified residues within proteinases that interact with substrates. A biochemical assay was developed that could identify residues which were important for substrate recognition. Chimeric proteases between yellow fever and dengue 2 were constructed that allowed mapping of regions involved in substrate recognition, and site directed mutagenesis was used to modulate processing efficiency.

Expression in vitro revealed that the dengue protease domain efficiently processes the yellow fever polyprotein between NS2A and NS2B and between NS2B and NS3, but that the reciprocal construct is inactive. The dengue protease processes yellow fever cleavage sites more efficiently than dengue cleavage sites, suggesting that suboptimal cleavage efficiency may be used to increase levels of processing intermediates in vivo. By mutagenizing the putative substrate binding pocket it was possible to change the substrate specificity of the yellow fever protease; changing a minimum of three amino acids in the yellow fever protease enabled it to recognize dengue cleavage sites. This system allows identification of residues which are directly or indirectly involved with enzyme-substrate interaction, does not require a crystal structure, and can define the substrate preferences of individual members of a viral proteinase family.

Full-length cDNA clones, from which infectious RNA can be transcribed, have been developed for a number of positive strand RNA viruses, including the flavivirus type virus, yellow fever. The technology necessary to transcribe genomic RNA of dengue 2 virus was developed in order to better understand the molecular biology of the dengue subgroup. A 5' structural region clone was engineered to transcribe authentic dengue RNA that contains an additional 1 or 2 residues at the 5' end. A 3' nonstructural region clone was engineered to allow production of run-off transcripts, and to allow directional ligation with the 5' structural region clone. In vitro ligation and transcription produce full-length genomic RNA which is noninfectious when transfected into mammalian tissue culture cells. Alternative methods for constructing cDNA clones and recovering live dengue virus are discussed.

Relevance: 20.00%

Abstract:

We report measurements of isotope abundance ratios for 5-50 MeV/nuc nuclei from a large solar flare that occurred on September 23, 1978. The measurements were made by the Heavy Isotope Spectrometer Telescope (HIST) on the ISEE-3 satellite orbiting the Sun near an Earth-Sun libration point approximately one million miles sunward of the Earth. We report finite values for the isotope abundance ratios 13C/12C, 15N/14N, 18O/16O, 22Ne/20Ne, 25Mg/24Mg, and 26Mg/24Mg, and upper limits for the isotope abundance ratios 3He/4He, 14C/12C, 17O/16O, and 21Ne/20Ne.

We measured element abundances and spectra to compare the September 23, 1978 flare with other flares reported in the literature. The flare is a typical large flare with "low" Fe/O abundance (≤ 0.1).

For 13C/12C, 15N/14N, 18O/16O, 25Mg/24Mg, and 26Mg/24Mg, our measured isotope abundance ratios agree with the solar system abundance ratios of Cameron (1981). For neon we measure 22Ne/20Ne = 0.109 (+0.026, -0.019), a value that differs, at 97.5% confidence, from the solar wind abundance of 22Ne/20Ne = 0.073 ± 0.001 measured by Geiss et al. (1972). Our measurement of 22Ne/20Ne agrees with the isotopic composition of the meteoritic component neon-A.

Separate arguments appear to rule out simple mass fractionation, in the solar wind and in our solar energetic particle measurements, as the cause of the discrepancy between the apparent compositions of these two sources of solar material.

Relevance: 20.00%

Abstract:

The ability to sense mechanical force is vital to all organisms to interact with and respond to stimuli in their environment. Mechanosensation is critical to many physiological functions such as the senses of hearing and touch in animals, gravitropism in plants and osmoregulation in bacteria. Of these processes, the best understood at the molecular level involve bacterial mechanosensitive channels. Under hypo-osmotic stress, bacteria are able to alleviate turgor pressure through mechanosensitive channels that gate directly in response to tension in the membrane lipid bilayer. A key participant in this response is the mechanosensitive channel of large conductance (MscL), a non-selective channel with a high conductance of ~3 nS that gates at tensions close to the membrane lytic tension.

It has been appreciated since the original discovery by C. Kung that the small subunit size (~130 to 160 residues) and the high conductance necessitate that MscL forms a homo-oligomeric channel. Over the past 20 years of study, the proposed oligomeric state of MscL has ranged from monomer to hexamer. Oligomeric state has been shown to vary between MscL homologues and is influenced by lipid/detergent environment. In this thesis, we report the creation of a chimera library to systematically survey the correlation between MscL sequence and oligomeric state to identify the sequence determinants of oligomeric state. Our results demonstrate that although there is no combination of sequences uniquely associated with a given oligomeric state (or mixture of oligomeric states), there are significant correlations. In the quest to characterize the oligomeric state of MscL, an exciting discovery was made about the dynamic nature of the MscL complex. We found that in detergent solution, under mild heating conditions (37-60 °C), subunits of MscL can exchange between complexes, and the dynamics of this process are sensitive to the protein sequence.

Extensive efforts were made to produce high diffraction quality crystals of MscL for the determination of a high resolution X-ray crystal structure of a full length channel. The surface entropy reduction strategy was applied to the design of S. aureus MscL variants and while the strategy appears to have improved the crystallizability of S. aureus MscL, unfortunately the diffraction qualities of these crystals were not significantly improved. MscL chimeras were also screened for crystallization in various solubilization detergents, but also failed to yield high quality crystals.

MscL is a fascinating protein and continues to serve as a model system for the study of the structural and functional properties of mechanosensitive channels. Further characterization of the MscL chimera library will offer more insight into the characteristics of the channel. Of particular interest are the functional characterization of the chimeras and the exploration of the physiological relevance of intercomplex subunit exchange.

Relevance: 20.00%

Abstract:

Complexity in the earthquake rupture process can result from many factors. This study investigates the origin of such complexity by examining several recent, large earthquakes in detail. In each case the local tectonic environment plays an important role in understanding the source of the complexity.

Several large shallow earthquakes (Ms > 7.0) along the Middle American Trench have similarities and differences between them that may lead to a better understanding of fracture and subduction processes. They are predominantly thrust events consistent with the known subduction of the Cocos plate beneath N. America. Two events occurring along this subduction zone close to triple junctions show considerable complexity. This may be attributable to a more heterogeneous stress environment in these regions and as such has implications for other subduction zone boundaries.

An event which looks complex but is actually rather simple is the 1978 Bermuda earthquake (Ms ~ 6). It is located predominantly in the mantle. Its mechanism is one of pure thrust faulting with a strike N 20°W and dip 42°NE. Its apparent complexity is caused by local crustal structure. This is an important event in terms of understanding and estimating seismic hazard on the eastern seaboard of N. America.

A study of several large strike-slip continental earthquakes identifies characteristics which are common to them and may be useful in determining what to expect from the next great earthquake on the San Andreas fault. The events are the 1976 Guatemala earthquake on the Motagua fault and two events on the Anatolian fault in Turkey (the 1967 Mudurnu Valley and 1976 E. Turkey events). An attempt to model the complex P-waveforms of these events results in good synthetic fits for the Guatemala and Mudurnu Valley events. However, the E. Turkey event proves to be too complex, as it may have associated thrust or normal faulting. Several individual sources occurring at intervals of between 5 and 20 seconds characterize the Guatemala and Mudurnu Valley events. The maximum size of an individual source appears to be bounded at about 5 x 10^26 dyne-cm. A detailed source study including directivity is performed on the Guatemala event. The source time history of the Mudurnu Valley event illustrates its significance in modeling strong ground motion in the near field. The complex source time series of the 1967 event produces amplitudes greater by a factor of 2.5 than a uniform model scaled to the same size for a station 20 km from the fault.
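
For scale, the standard definition of the seismic moment quoted above, with a moment-magnitude conversion added here for illustration (not from the thesis):

```latex
% Seismic moment of a single subevent:
M_0 = \mu A \bar{D}
% \mu: rigidity of the faulted rock, A: ruptured area,
% \bar{D}: average slip.
% 5 \times 10^{26}\ \mathrm{dyne\,cm} = 5 \times 10^{19}\ \mathrm{N\,m},
% roughly M_w \approx 7.1 on the moment-magnitude scale.
```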

Three large and important earthquakes demonstrate an important type of complexity --- multiple-fault complexity. The first, the 1976 Philippine earthquake, an oblique thrust event, represents the first seismological evidence for a northeast dipping subduction zone beneath the island of Mindanao. A large event, following the mainshock by 12 hours, occurred outside the aftershock area and apparently resulted from motion on a subsidiary fault since the event had a strike-slip mechanism.

An aftershock of the great 1960 Chilean earthquake on June 6, 1960, proved to be an interesting discovery. It appears to be a large strike-slip event at the main rupture's southern boundary. It most likely occurred on the landward extension of the Chile Rise transform fault, in the subducting plate. The results for this event suggest that a small event triggered a series of slow events; the duration of the whole sequence being longer than 1 hour. This is indeed a "slow earthquake".

Perhaps one of the most complex of events is the recent Tangshan, China event. It began as a large strike-slip event. Within several seconds of the mainshock it may have triggered thrust faulting to the south of the epicenter. There is no doubt, however, that it triggered a large oblique normal event to the northeast, 15 hours after the mainshock. This event certainly contributed to the great loss of life sustained as a result of the Tangshan earthquake sequence.

What has been learned from these studies has been applied to predict what one might expect from the next great earthquake on the San Andreas. The expectation from this study is that such an event would be a large complex event, not unlike, but perhaps larger than, the Guatemala or Mudurnu Valley events. That is to say, it will most likely consist of a series of individual events in sequence. It is also quite possible that the event could trigger associated faulting on neighboring fault systems such as those occurring in the Transverse Ranges. This has important bearing on the earthquake hazard estimation for the region.

Relevance: 20.00%

Abstract:

The high computational cost of correlated wavefunction theory (WFT) calculations has motivated the development of numerous methods to partition the description of large chemical systems into smaller subsystem calculations. For example, WFT-in-DFT embedding methods facilitate the partitioning of a system into two subsystems: a subsystem A that is treated using an accurate WFT method, and a subsystem B that is treated using a more efficient Kohn-Sham density functional theory (KS-DFT) method. Representation of the interactions between subsystems is non-trivial, and often requires the use of approximate kinetic energy functionals or computationally challenging optimized effective potential calculations; however, it has recently been shown that these challenges can be eliminated through the use of a projection operator. This dissertation describes the development and application of embedding methods that enable accurate and efficient calculation of the properties of large chemical systems.
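
For reference, the level-shift projection construction that eliminates those challenges takes the following standard form in the literature; the thesis's working equations may differ in detail:

```latex
% Embedded core Hamiltonian for subsystem A in the presence of B:
h^{A\text{-in-}B} = h + g[\gamma^A + \gamma^B] - g[\gamma^A]
                  + \mu\, S\, \gamma^B S
% h: bare one-electron Hamiltonian; g[\cdot]: two-electron (Coulomb
% plus exchange-correlation) potential of a density; S: AO overlap
% matrix; \gamma^B: subsystem-B density matrix. A large level shift
% \mu keeps the A orbitals orthogonal to those of B, so no approximate
% kinetic-energy functional or OEP calculation is needed.
```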

Chapter 1 introduces a method for efficiently performing projection-based WFT-in-DFT embedding calculations on large systems. This is accomplished by using a truncated basis set representation of the subsystem A wavefunction. We show that naive truncation of the basis set associated with subsystem A can lead to large numerical artifacts, and present an approach for systematically controlling these artifacts.

Chapter 2 describes the application of the projection-based embedding method to investigate the oxidative stability of lithium-ion batteries. We study the oxidation potentials of mixtures of ethylene carbonate (EC) and dimethyl carbonate (DMC) by using the projection-based embedding method to calculate the vertical ionization energy (IE) of individual molecules at the CCSD(T) level of theory, while explicitly accounting for the solvent using DFT. Interestingly, we reveal that large contributions to the solvation properties of DMC originate from quadrupolar interactions, resulting in a much larger solvent reorganization energy than that predicted using simple dielectric continuum models. Demonstration that the solvation properties of EC and DMC are governed by fundamentally different intermolecular interactions provides insight into key aspects of lithium-ion batteries, with relevance to electrolyte decomposition processes, solid-electrolyte interphase formation, and the local solvation environment of lithium cations.

Relevance: 20.00%

Abstract:

The process of prophage integration by phage λ and the function and structure of the chromosomal elements required for λ integration have been studied with the use of λ deletion mutants. Since attφ, the substrate of the integration enzymes, is not essential for λ growth, and since attφ resides in a portion of the λ chromosome which is not necessary for vegetative growth, viable λ deletion mutants were isolated and examined to dissect the structure of attφ.

Deletion mutants were selected from wild type populations by treating the phage under conditions where phage are inactivated at a rate dependent on the DNA content of the particles. A number of deletion mutants were obtained in this way, and many of these mutants proved to have defects in integration. These defects were defined by analyzing the properties of Int-promoted recombination in these att mutants.

The types of mutants found and their properties indicated that attφ has three components: a cross-over point which is bordered on either side by recognition elements whose sequence is specifically required for normal integration. The interactions of the recognition elements in Int-promoted recombination between att mutants were examined and proved to be quite complex. In general, however, it appears that the λ integration system can function with a diverse array of mutant att sites.

The structure of attφ was examined by comparing the genetic properties of various att mutants with their location in the λ chromosome. To map these mutants, the techniques of heteroduplex DNA formation and electron microscopy were employed. It was found that integration cross-overs occur at only one point in attφ and that the recognition sequences that direct the integration enzymes to their site of action are quite small, less than 2000 nucleotides each. Furthermore, no base pair homology was detected between attφ and its bacterial analog, attB. This result clearly demonstrates that λ integration can occur between chromosomes which have little, if any, homology. In this respect, λ integration is unique as a system of recombination since most forms of generalized recombination require extensive base pair homology.

An additional study on the genetic and physical distances in the left arm of the λ genome was described. Here, a large number of conditional lethal nonsense mutants were isolated and mapped, and a genetic map of the entire left arm, comprising a total of 18 genes, was constructed. Four of these genes were discovered in this study. A series of λdg transducing phages was mapped by heteroduplex electron microscopy and the relationship between physical and genetic distances in the left arm was determined. The results indicate that recombination frequency in the left arm is an accurate reflection of physical distances, and moreover, there do not appear to be any undiscovered genes in this segment of the genome.

Relevance: 20.00%

Abstract:

Large plane deformations of thin elastic sheets of neo-Hookean material are considered and a method of successive substitutions is developed to solve problems within the two-dimensional theory of finite plane stress. The first approximation is determined by linear boundary value problems on two harmonic functions, and it is approached asymptotically at very large extensions in the plane of the sheet. The second and higher approximations are obtained by solving Poisson equations. The method requires modification when the membrane has a traction-free edge.
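
Schematically, the iteration described has the structure below; the operators N1 and N2 standing for the nonlinear right-hand sides are my notation, not the thesis's field variables:

```latex
% First approximation: two linear boundary-value problems on
% harmonic functions,
\nabla^2 \phi^{(1)} = 0, \qquad \nabla^2 \psi^{(1)} = 0
% Successive substitutions: Poisson equations whose right-hand
% sides are built from the previous iterate,
\nabla^2 \phi^{(n+1)} = N_1\!\left[\phi^{(n)}, \psi^{(n)}\right],
\qquad
\nabla^2 \psi^{(n+1)} = N_2\!\left[\phi^{(n)}, \psi^{(n)}\right]
% The harmonic first approximation becomes exact asymptotically at
% very large in-plane extensions, consistent with the text above.
```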

Several problems are treated involving infinite sheets under uniform biaxial stretching at infinity. First approximations are obtained when a circular or elliptic inclusion is present and when the sheet has a circular or elliptic hole, including the limiting cases of a line inclusion and a straight crack or slit. Good agreement with exact solutions is found for circularly symmetric deformations. Other examples discuss the stretching of a short wide strip, the deformation near a boundary corner which is traction-free, and the application of a concentrated load to a boundary point.

Relevance: 20.00%

Abstract:

A theory of electromagnetic absorption is presented to explain the changes in surface impedance for Pippard superconductors (ξ_0 ≫ λ) due to large static magnetic fields. The static magnetic field penetrating the metal near the surface induces a momentum dependent potential in Bogolubov's equations. Such a potential modifies a quasiparticle's wavefunction and excitation spectrum. These changes affect the behavior of the surface impedance in a way that in large measure agrees with available observations.
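
For reference, Bogolubov's equations (more commonly spelled Bogoliubov-de Gennes) in their standard form with a vector potential; the linearized field term is the momentum-dependent potential the abstract refers to:

```latex
% Bogoliubov-de Gennes equations for quasiparticle amplitudes (u, v):
E\,u = \left[\frac{\left(-i\hbar\nabla - \tfrac{e}{c}\mathbf{A}\right)^2}{2m} - E_F\right] u + \Delta\, v
\qquad
E\,v = -\left[\frac{\left(-i\hbar\nabla + \tfrac{e}{c}\mathbf{A}\right)^2}{2m} - E_F\right] v + \Delta^{*} u
% Linearizing in A gives a term \sim -\tfrac{e}{mc}\,\mathbf{A}\cdot\mathbf{p},
% the momentum-dependent potential induced by the static field
% penetrating within ~\lambda of the surface.
```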

Relevance: 20.00%

Abstract:

A large array has been used to investigate the P-wave velocity structure of the lower mantle. Linear array processing methods are reviewed and a method of nonlinear processing is presented. Phase velocities, travel times, and relative amplitudes of P waves have been measured with the large array at the Tonto Forest Seismological Observatory in Arizona for 125 earthquakes in the distance range of 30 to 100 degrees. Various models are assumed for the upper 771 km of the mantle and the Wiechert-Herglotz method is applied to the phase velocity data to obtain a velocity depth structure for the lower mantle. The phase velocity data indicates the presence of a second-order discontinuity at a depth of 840 km, another at 1150 km, and less pronounced discontinuities at 1320, 1700 and 1950 km. Phase velocities beyond 85 degrees are interpreted in terms of a triplication of the phase velocity curve, and this results in a zone of almost constant velocity between depths of 2670 and 2800 km. Because of the uncertainty in the upper mantle assumptions, a final model cannot be proposed, but it appears that the lower mantle is more complicated than the standard models and there is good evidence for second-order discontinuities below a depth of 1000 km. A tentative lower bound of 2881 km can be placed on the depth to the core. The importance of checking the calculated velocity structure against independently measured travel times is pointed out. Comparisons are also made with observed PcP times and the agreement is good. The method of using measured values of the rate of change of amplitude with distance shows promising results.
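
For reference, the Wiechert-Herglotz inversion in its flat-earth form (standard result from the literature, not transcribed from the thesis): given the ray parameter p = dT/dX measured from array phase velocities as a function of distance X, the turning depth of each ray follows by direct integration.

```latex
% Herglotz-Wiechert inversion (flat-earth form): the turning depth
% of the ray emerging at distance X_1 with ray parameter p_1 is
z(p_1) = \frac{1}{\pi} \int_0^{X_1} \cosh^{-1}\!\left(\frac{p(X)}{p_1}\right) \mathrm{d}X,
% and the velocity at that depth is v(z) = 1/p_1. The integral
% requires a monotonic p(X), so triplications of the phase velocity
% curve, like the one described above, must be unraveled first.
```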