32 results for mismatched uncertainties in CaltechTHESIS
Abstract:
Oligonucleotide-directed triple helix formation is one of the most versatile methods for the sequence-specific recognition of double helical DNA. Chapter 2 describes affinity cleaving experiments carried out to assess the recognition potential of purine-rich oligonucleotides via the formation of triple helices. Purine-rich oligodeoxyribonucleotides were shown to bind specifically to purine tracts of double helical DNA in the major groove antiparallel to the purine strand of the duplex. Specificity was derived from the formation of reverse Hoogsteen G•GC, A•AT and T•AT triplets, and binding was largely limited to purine tracts. This triple helical structure was stabilized by multivalent cations, destabilized by high concentrations of monovalent cations, and was insensitive to pH. A single mismatched base triplet was shown to destabilize a 15-mer triple helix by 1.0 kcal/mole at 25°C. In addition, stability appeared to be correlated with the number of G•GC triplets formed in the triple helix. This structure provides an additional framework as a basis for the design of new sequence-specific DNA binding molecules.
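As a rough illustration of what a 1.0 kcal/mole destabilization means for binding (a back-of-the-envelope conversion added here for orientation, not a number reported in the thesis beyond the free energy itself), the ratio of equilibrium association constants at 25°C follows from

K_match / K_mismatch = exp(ΔΔG / RT) = exp[1.0 / (1.987 × 10^(-3) × 298)] ≈ 5.4,

so a single mismatched triplet corresponds to roughly a five-fold loss in binding affinity.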
In work described in Chapter 3, the triplet specificities and required strand orientations of two classes of DNA triple helices were combined to target double helical sequences containing all four base pairs by alternate strand triple helix formation. This allowed for the use of oligonucleotides containing only natural 3'-5' phosphodiester linkages to simultaneously bind both strands of double helical DNA in the major groove. The stabilities and structures of these alternate strand triple helices depended on whether the binding site sequence was 5'-(purine)_m(pyrimidine)_n-3' or 5'-(pyrimidine)_m(purine)_n-3'.
In Chapter 4, the ability of oligonucleotide-cerium(III) chelates to direct the transesterification of RNA was investigated. Procedures were developed for the modification of DNA and RNA oligonucleotides with a hexadentate Schiff-base macrocyclic cerium(III) complex. In addition, oligoribonucleotides modified by covalent attachment of the metal complex through two different linker structures were prepared. The ability of these structures to direct transesterification to specific RNA phosphodiesters was assessed by gel electrophoresis. No reproducible cleavage of the RNA strand consistent with transesterification could be detected in any of these experiments.
Abstract:
Hartree-Fock (HF) calculations have had remarkable success in describing large nuclei at high spin, temperature and deformation. To allow the full range of possible deformations, the Skyrme HF equations can be discretized on a three-dimensional mesh. However, such calculations are currently limited by the computational resources provided by traditional supercomputers. To take advantage of recent developments in massively parallel computing technology, we have implemented the LLNL Skyrme-force static and rotational HF codes on Intel's DELTA and GAMMA systems at Caltech.
We decomposed the HF code by assigning a portion of the mesh to each node, with nearest-neighbor mesh blocks assigned to nodes connected by communication channels. This kind of decomposition is well suited to the DELTA and GAMMA architectures because the only non-local operations are wave function orthogonalization and the boundary conditions of the Poisson equation for the Coulomb field.
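The description above is a standard 3-D domain decomposition with halo exchange between neighboring nodes. The following sketch is illustrative only (the mesh size, processor grid, and function names are hypothetical, and a serial loop stands in for real message passing); it shows how a mesh can be split into blocks and which node pairs would need to communicate:

```python
import numpy as np

# Hypothetical global mesh and processor grid (illustrative values only).
MESH_SHAPE = (32, 32, 32)      # global spatial mesh points
PROC_GRID  = (2, 2, 2)         # logical 3-D grid of compute nodes

def block_slices(mesh_shape, proc_grid):
    """Assign each node a contiguous block of the global mesh."""
    blocks = {}
    for idx in np.ndindex(*proc_grid):
        slc = tuple(
            slice(i * n // p, (i + 1) * n // p)
            for i, n, p in zip(idx, mesh_shape, proc_grid)
        )
        blocks[idx] = slc
    return blocks

def neighbor_pairs(proc_grid):
    """List pairs of nodes that share a mesh face and therefore must
    exchange boundary (halo) data, e.g. for finite-difference derivatives."""
    pairs = []
    for idx in np.ndindex(*proc_grid):
        for axis in range(3):
            nbr = list(idx)
            nbr[axis] += 1
            if nbr[axis] < proc_grid[axis]:
                pairs.append((idx, tuple(nbr)))
    return pairs

blocks = block_slices(MESH_SHAPE, PROC_GRID)
print("node (0,0,0) owns mesh block:", blocks[(0, 0, 0)])
print("nearest-neighbor communication links:", len(neighbor_pairs(PROC_GRID)))
```

Only these face-sharing links carry local traffic; the global operations (orthogonalization, Coulomb boundary conditions) are the exceptions noted above.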
Our first application of the HF code on parallel computers has been the study of identical superdeformed (SD) rotational bands in the Hg region. In the last ten years, many SD rotational bands have been found experimentally. One very surprising feature found in these SD rotational bands is that many pairs of bands in nuclei that differ by one or two mass units have nearly identical deexcitation gamma-ray energies. Our calculations of the five rotational bands in ^(192)Hg and ^(194)Pb show that the filling of specific orbitals can lead to bands with deexcitation gamma-ray energies differing by at most 2 keV in nuclei differing by two mass units and over a range of angular momenta comparable to that observed experimentally. Our calculations of SD rotational bands in the Dy region also show that twinning can be achieved by filling or emptying some specific orbitals.
The interpretation of future precise experiments on atomic parity nonconservation (PNC) in terms of parameters of the Standard Model could be hampered by uncertainties in the atomic and nuclear structure. As a further application of the massively parallel HF calculations, we calculated the proton and neutron densities of the Cesium isotopes from A = 125 to A = 139. Based on our good agreement with experimental charge radii, binding energies, and ground state spins, we conclude that the uncertainties in the ratios of weak charges are less than 10^(-3), comfortably smaller than the anticipated experimental error.
Abstract:
We present a measurement of the direct CP asymmetry A_CP in b → sγ decays, and of the difference between A_CP for neutral and charged B mesons, ΔA_{X_sγ}, using 429 fb^(-1) of data recorded at the Upsilon(4S) resonance with the BABAR detector. B mesons are reconstructed from 16 exclusive final states. Particle identification uses an algorithm based on Error Correcting Output Codes with an exhaustive matrix. Background rejection and best-candidate selection are performed with two decision-tree-based classifiers. We find A_CP = (1.73 ± 1.93 ± 1.02)% and ΔA_{X_sγ} = (4.97 ± 3.90 ± 1.45)%, where the uncertainties are statistical and systematic, respectively. Based on the measured value of ΔA_{X_sγ}, we determine a 90% confidence interval for Im(C_8g/C_7γ), where C_7γ and C_8g are Wilson coefficients for New Physics amplitudes: −1.64 < Im(C_8g/C_7γ) < 6.52.
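For reference (our restatement of the standard convention, not text taken from the abstract), the asymmetry is defined from the rates of the B and its CP conjugate,

A_CP = [Γ(B̄ → X_s γ) − Γ(B → X_s̄ γ)] / [Γ(B̄ → X_s γ) + Γ(B → X_s̄ γ)],

and the two quoted uncertainties combine in quadrature to a total of sqrt(1.93^2 + 1.02^2)% ≈ 2.2% on A_CP.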
Abstract:
Seismic structure above and below the core-mantle boundary (CMB) has been studied through use of travel time and waveform analyses of several different seismic wave groups. Anomalous systematic trends in observables document mantle heterogeneity on both large and small scales. Analog and digital data have been utilized, and in many cases the analog data have been optically scanned and digitized prior to analysis.
Differential travel times of S - SKS are shown to be an excellent diagnostic of anomalous lower mantle shear velocity (V_s) structure. Wavepath geometries beneath the central Pacific exhibit large S - SKS travel time residuals (up to 10 sec), and are consistent with a large scale O(1000 km) slower than average V_s region (≥3%). S - SKS times for paths traversing this region exhibit smaller scale patterns and trends O(100 km), indicating V_s perturbations on many scale lengths. These times are compared to predictions of three tomographically derived aspherical models: MDLSH of Tanimoto [1990], model SH12_WM13 of Su et al. [1992], and model SH.10c.17 of Masters et al. [1992]. Qualitative agreement between the tomographic model predictions and observations is encouraging, varying from fair to good. However, inconsistencies are present and suggest anomalies in the lower mantle of scale length smaller than the present 2000+ km scale resolution of tomographic models. 2-D wave propagation experiments show the importance of inhomogeneous raypaths when considering lateral heterogeneities in the lowermost mantle.
A dataset of waveforms and differential travel times of S, ScS, and the arrival from the D" layer, Scd, provides evidence for a laterally varying V_s velocity discontinuity at the base of the mantle. Two different localized D" regions beneath the central Pacific have been investigated. Predictions from a model having a V_s discontinuity 180 km above the CMB agree well with observations for an eastern mid-Pacific CMB region. This thickness differs from V_s discontinuity thicknesses found in other regions, such as a localized region beneath the western Pacific, which average near 280 km. The "sharpness" of the V_s jump at the top of D", i.e., the depth range over which the V_s increase occurs, is not resolved by our data, which may in fact be modeled equally well by a lower mantle with the increase in V_s at the top of D" occurring over a 100 km depth range. It is difficult at present to correlate D" thicknesses from this study with overall lower mantle heterogeneity, due to uncertainties in the 3-D models, as well as poor coverage in maps of D" discontinuity thicknesses.
P-wave velocity structure (V_p) at the base of the mantle is explored using the seismic phases SKS and SPdKS. SPdKS is formed when SKS waves at distances around 107° are incident upon the CMB with a slowness that allows for coupling with diffracted P-waves at the base of the mantle. The P-wave diffraction occurs at both the SKS entrance and exit locations of the outer core. SPdKS arrives slightly later in time than SKS, having a wave path through the mantle and core very close to SKS. The difference time between SKS and SPdKS strongly depends on V_p at the base of the mantle near the SKS core entrance and exit points. Observations from deep focus Fiji-Tonga events recorded by North American stations, and South American events recorded by European and Eurasian stations, exhibit anomalously large SPdKS - SKS difference times. SKS and the later arriving SPdKS phase are separated by several seconds more than predictions made by 1-D reference models, such as the global average PREM [Dziewonski and Anderson, 1981] model. Models having a pronounced low-velocity zone (5%) in V_p in the bottom 50-100 km of the mantle predict the size of the observed SPdKS - SKS anomalies. Raypath perturbations from lower mantle V_s structure may also be contributing to the observed anomalies.
Outer core structure is investigated using the family of SmKS (m=2,3,4) seismic waves. SmKS are waves that travel as S-waves in the mantle, P-waves in the core, and reflect (m-1) times on the underside of the CMB, and are well-suited for constraining outermost core V_p structure. This is due to the closeness of the mantle paths and also the shallow depth range these waves travel in the outermost core. S3KS - S2KS and S4KS - S3KS differential travel times were measured using the cross-correlation method and compared to those from reflectivity synthetics created from core models of past studies. High quality recordings from a deep focus Java Sea event which sample the outer core beneath the northern Pacific, the Arctic, and northwestern North America (spanning 1/8th of the core's surface area), have SmKS wavepaths that traverse regions where lower mantle heterogeneity is predicted to be small, and are well-modeled by the PREM core model, with possibly a small V_p decrease (1.5%) in the outermost 50 km of the core. Such a reduction implies chemical stratification in this 50 km zone, though this model feature is not uniquely resolved. Data having wave paths through areas of known D" heterogeneity (±2% and greater), such as the source-side of SmKS lower mantle paths from Fiji-Tonga to Eurasia and Africa, exhibit systematic SmKS differential time anomalies of up to several seconds. 2-D wave propagation experiments demonstrate how large scale lower mantle velocity perturbations can explain long wavelength behavior of such anomalous SmKS times. When improperly accounted for, lower mantle heterogeneity maps directly into core structure. Raypaths departing from homogeneity play an important role in producing SmKS anomalies. The existence of outermost core heterogeneity is difficult to resolve at present due to uncertainties in global lower mantle structure. Resolving a one-dimensional chemically stratified outermost core also remains difficult due to the same uncertainties. Restricting study to higher multiples of SmKS (m=2,3,4) can help reduce the effect of mantle heterogeneity due to the closeness of the mantle legs of the wavepaths. SmKS waves are ideal in providing additional information on the details of lower mantle heterogeneity.
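The differential times above are measured by cross-correlation of windowed arrivals; a minimal sketch of that idea (synthetic Gaussian pulses and a hypothetical sampling rate, not the thesis data) is:

```python
import numpy as np

dt = 0.05                       # sampling interval in seconds (hypothetical)
t = np.arange(0, 60, dt)

def pulse(t0, width=1.5):
    """Simple Gaussian pulse standing in for a windowed SmKS arrival."""
    return np.exp(-((t - t0) / width) ** 2)

s2ks = pulse(20.0)              # stand-in for the S2KS window
s3ks = pulse(27.3)              # stand-in for the later S3KS window

# Cross-correlate and take the lag of the maximum as the differential time.
lags = (np.arange(2 * len(t) - 1) - (len(t) - 1)) * dt
xcorr = np.correlate(s3ks, s2ks, mode="full")
print("measured S3KS - S2KS differential time: %.2f s" % lags[np.argmax(xcorr)])
```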
Abstract:
The dissertation studies the general area of complex networked systems that consist of interconnected and active heterogeneous components and usually operate in uncertain environments and with incomplete information. Problems associated with those systems are typically large-scale and computationally intractable, yet they are also very well-structured and have features that can be exploited by appropriate modeling and computational methods. The goal of this thesis is to develop foundational theories and tools to exploit those structures that can lead to computationally-efficient and distributed solutions, and apply them to improve systems operations and architecture.
Specifically, the thesis focuses on two concrete areas. The first is the design of distributed rules to manage distributed energy resources in the power network. The power network is undergoing a fundamental transformation. The future smart grid, especially on the distribution system, will be a large-scale network of distributed energy resources (DERs), each introducing random and rapid fluctuations in power supply, demand, voltage and frequency. These DERs provide a tremendous opportunity for sustainability, efficiency, and power reliability. However, there are daunting technical challenges in managing these DERs and optimizing their operation. The focus of this dissertation is to develop scalable, distributed, and real-time control and optimization to achieve system-wide efficiency, reliability, and robustness for the future power grid. In particular, we will show how to exploit the power network structure to design efficient and distributed markets and algorithms for energy management. We will also show how to connect the algorithms with physical dynamics and existing control mechanisms for real-time control in power networks.
The second focus is to develop distributed optimization rules for general multi-agent engineering systems. A central goal in multiagent systems is to design local control laws for the individual agents to ensure that the emergent global behavior is desirable with respect to the given system level objective. Ideally, a system designer seeks to satisfy this goal while conditioning each agent's control on the least amount of information possible. Our work focused on achieving this goal using the framework of game theory. In particular, we derived a systematic methodology for designing local agent objective functions that guarantees (i) an equivalence between the resulting game-theoretic equilibria and the system level design objective and (ii) that the resulting game possesses an inherent structure that can be exploited for distributed learning, e.g., potential games. The control design can then be completed by applying any distributed learning algorithm that guarantees convergence to the game-theoretic equilibrium. One main advantage of this game theoretic approach is that it provides a hierarchical decomposition between the design of the agents' objective functions (game design) and the specific local decision rules (distributed learning algorithms). This decomposition provides the system designer with tremendous flexibility to meet the design objectives and constraints inherent in a broad class of multiagent systems. Furthermore, in many settings the resulting controllers will be inherently robust to a host of uncertainties including asynchronous clock rates, delays in information, and component failures.
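For concreteness, the "inherent structure" exploited for distributed learning is, in the exact potential game case, the existence of a single potential function Φ that tracks every unilateral deviation (a standard definition, stated here for context rather than quoted from the thesis):

U_i(a_i', a_{-i}) − U_i(a_i, a_{-i}) = Φ(a_i', a_{-i}) − Φ(a_i, a_{-i}) for every agent i and all a_i', a_i, a_{-i}.

Designing the local objectives U_i so that such a Φ exists, and aligns with the system-level objective, is what allows off-the-shelf learning dynamics to converge to the desired equilibria.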
Abstract:
To explain the ^(26)Mg isotopic anomaly seen in meteorites (^(26)Al daughter) as well as the observation of 1809-keV γ rays in the interstellar medium (live decay of ^(26)Al), one must know, among other things, the destruction rate of ^(26)Al. Properties of states in ^(27)Si just above the ^(26)Al + p mass were investigated to determine the destruction rate of ^(26)Al via the ^(26)Al(p,γ)^(27)Si reaction at astrophysical temperatures.
Twenty micrograms of ^(26)Al were used to produce two types of Al_2O_3 targets by evaporation of the oxide. One was onto a thick platinum backing suitable for (p,γ) work, and the other onto a thin carbon foil for the (^3He,d) reaction.
The ^(26)Al(p,γ)^(27)Si excitation function, obtained using a germanium detector and a voltage-ramped target, confirmed known resonances and revealed new ones at 770, 847, 876, 917, and 928 keV. Possible resonances below the lowest observed one at E_p = 286 keV were investigated using the ^(26)Al(^3He,d)^(27)Si proton-transfer reaction. States in ^(27)Si corresponding to 196- and 286-keV proton resonances were observed. A possible resonance at 130 keV (postulated in prior work) was shown to have a strength ωγ of less than 0.02 µeV.
By arranging four large NaI detectors as a 4π calorimeter, the 196-keV proton resonance, and one at 247 keV, were observed directly, having ωγ = 55 ± 9 and 10 ± 5 µeV, respectively.
Large uncertainties in the reaction rate have been reduced. At nova temperatures, the rate is about 100 times faster than that used in recent model calculations, casting some doubt on the production of galactic ^(26)Al in novae.
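For orientation, the measured strengths ωγ enter the stellar rate through the usual narrow-resonance expression (quoted here in its standard textbook form, not from the abstract):

N_A⟨σv⟩ = 1.54 × 10^(11) (μ T_9)^(-3/2) Σ_i (ωγ)_i[MeV] exp(−11.605 E_i[MeV] / T_9) cm^3 s^(-1) mol^(-1),

where μ is the reduced mass in amu and T_9 is the temperature in units of 10^9 K, so the lowest-lying resonances dominate at nova temperatures.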
Abstract:
Many engineering applications face the problem of bounding the expected value of a quantity of interest (performance, risk, cost, etc.) that depends on stochastic uncertainties whose probability distribution is not known exactly. Optimal uncertainty quantification (OUQ) is a framework that aims at obtaining the best bound in these situations by explicitly incorporating available information about the distribution. Unfortunately, this often leads to non-convex optimization problems that are numerically expensive to solve.
This thesis focuses on efficient numerical algorithms for OUQ problems. It begins by investigating several classes of OUQ problems that can be reformulated as convex optimization problems. Conditions on the objective function and information constraints under which a convex formulation exists are presented. Since the size of the optimization problem can become quite large, solutions for scaling up are also discussed. Finally, the capability of analyzing a practical system through such convex formulations is demonstrated by a numerical example of energy storage placement in power grids.
When an equivalent convex formulation is unavailable, it is possible to find a convex problem that provides a meaningful bound for the original problem, also known as a convex relaxation. As an example, the thesis investigates the setting used in Hoeffding's inequality. The naive formulation requires solving a collection of non-convex polynomial optimization problems whose number grows doubly exponentially. After structures such as symmetry are exploited, it is shown that both the number and the size of the polynomial optimization problems can be reduced significantly. Each polynomial optimization problem is then bounded by its convex relaxation using sums-of-squares. These bounds are found to be tight in all the numerical examples tested in the thesis and are significantly better than Hoeffding's bounds.
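For reference, the classical bound being tightened is Hoeffding's inequality: for independent random variables X_i with a_i ≤ X_i ≤ b_i,

P( Σ_{i=1}^{n} (X_i − E[X_i]) ≥ t ) ≤ exp( −2 t^2 / Σ_{i=1}^{n} (b_i − a_i)^2 ),

and the OUQ formulation asks how much this bound can be improved once additional information about the distribution is imposed.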
Abstract:
In this thesis I apply paleomagnetic techniques to paleoseismological problems. I investigate the use of secular-variation magnetostratigraphy to date prehistoric earthquakes; I identify liquefaction remanent magnetization (LRM), and I quantify coseismic deformation within a fault zone by measuring the rotation of paleomagnetic vectors.
In Chapter 2 I construct a secular-variation reference curve for southern California. For this curve I measure three new well-constrained paleomagnetic directions: two from the Pallett Creek paleoseismological site at A.D. 1397-1480 and A.D. 1465-1495, and one from Panum Crater at A.D. 1325-1365. To these three directions I add the best nine data points from the Sternberg secular-variation curve, five data points from Champion, and one point from the A.D. 1480 eruption of Mt. St. Helens. I derive the error due to the non-dipole field that is added to these data by the geographical correction to southern California. Combining these yields a secular variation curve for southern California covering the period A.D. 670 to 1910, with the best coverage in the range A.D. 1064 to 1505.
In Chapter 3 I apply this curve to a problem in southern California. Two paleoseismological sites in the Salton trough of southern California have sediments deposited by prehistoric Lake Cahuilla. At the Salt Creek site I sampled sediments from three different lakes, and at the Indio site I sampled sediments from four different lakes. Based upon the coinciding paleomagnetic directions I correlate the oldest lake sampled at Salt Creek with the oldest lake sampled at Indio. Furthermore, the penultimate lake at Indio does not appear to be present at Salt Creek. Using the secular variation curve I can assign the lakes at Salt Creek to broad age ranges of A.D. 800 to 1100, A.D. 1100 to 1300, and A.D. 1300 to 1500. This example demonstrates the large uncertainties in the secular variation curve and the need to construct curves from a limited geographical area.
Chapter 4 demonstrates that seismically induced liquefaction can cause resetting of detrital remanent magnetization and acquisition of a liquefaction remanent magnetization (LRM). I sampled three different liquefaction features, a sandbody formed in the Elsinore fault zone, diapirs from sediments of Mono Lake, and a sandblow in these same sediments. In every case the liquefaction features showed stable magnetization despite substantial physical disruption. In addition, in the case of the sandblow and the sandbody, the intensity of the natural remanent magnetization increased by up to an order of magnitude.
In Chapter 5 I apply paleomagnetics to measuring the tectonic rotations in a 52-meter-long transect across the San Andreas fault zone at the Pallett Creek paleoseismological site. This site has presented a significant problem because the brittle long-term average slip-rate across the fault is significantly less than the slip-rate from other nearby sites. I find sections adjacent to the fault with tectonic rotations of up to 30°. If interpreted as block rotations, the non-brittle offset was 14.0 +2.8/−2.1 meters in the last three earthquakes and 8.5 +1.0/−0.9 meters in the last two. Combined with the brittle offset in these events, the last three events all had about 6 meters of total fault offset, even though the intervals between them were markedly different.
In Appendix 1 I present a detailed description of my standard sampling and demagnetization procedure.
In Appendix 2 I present a detailed discussion of the study at Panum Crater that yielded the well-constrained paleomagnetic direction used in developing the secular variation curve in Chapter 2. In addition, from sampling two distinctly different clast types in a block-and-ash flow deposit from Panum Crater, I find that this flow had a complex emplacement and cooling history. Angular, glassy "lithic" blocks were emplaced at temperatures above 600°C. Some of these had cooled nearly completely, whereas others had cooled only to 450°C, when settling in the flow rotated the blocks slightly. The partially cooled blocks then finished cooling without further settling. Highly vesicular, breadcrusted pumiceous clasts had not yet cooled to 600°C at the time of these rotations, because they show a stable, well-clustered, unidirectional magnetic vector.
Abstract:
We report measurements of the proton form factors, G^p_E and G^p_M, extracted from elastic electron scattering in the range 1 ≤ Q^2 ≤ 3 (GeV/c)^2 with uncertainties of <15% in G^p_E and <3% in G^p_M. The results for G^p_E are somewhat larger than indicated by most theoretical parameterizations. The ratio of Pauli and Dirac form factors, Q^2(F^p_2/F^p_1), is lower in value and shows less Q^2 dependence than these parameterizations have indicated. Comparisons are made to theoretical models, including those based on perturbative QCD, vector-meson dominance, QCD sum rules, and diquark constituents of the proton. A global extraction of the form factors, including previous elastic scattering measurements, is also presented.
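For context, G^p_E and G^p_M are obtained from the measured elastic cross sections through the standard Rosenbluth decomposition (stated here in its usual convention, not quoted from the abstract):

dσ/dΩ = (dσ/dΩ)_Mott [ ((G^p_E)^2 + τ(G^p_M)^2)/(1 + τ) + 2τ(G^p_M)^2 tan^2(θ/2) ], with τ = Q^2/(4M_p^2),

and in the usual convention the Pauli and Dirac form factors are related to them by G_E = F_1 − τF_2 and G_M = F_1 + F_2, which is how the quoted ratio Q^2(F^p_2/F^p_1) follows from the extracted G^p_E and G^p_M.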
Abstract:
In this work, computationally efficient approximate methods are developed for analyzing uncertain dynamical systems. Uncertainties in both the excitation and the modeling are considered and examples are presented illustrating the accuracy of the proposed approximations.
For nonlinear systems under uncertain excitation, methods are developed to approximate the stationary probability density function and statistical quantities of interest. The methods are based on approximating solutions to the Fokker-Planck equation for the system, and differ from traditional methods in which approximate solutions to stochastic differential equations are found. The new methods require little computational effort, and examples are presented for which the accuracy of the proposed approximations compares favorably with that of existing methods. The most significant improvements are made in approximating quantities related to the extreme values of the response, such as expected outcrossing rates, which are crucial for evaluating the reliability of the system.
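For reference, the stationary density p(x) being approximated satisfies the time-independent Fokker-Planck equation associated with dynamics of the form dx = a(x) dt + B(x) dW (standard form, stated for orientation):

0 = −Σ_i ∂/∂x_i [a_i(x) p(x)] + (1/2) Σ_{i,j} ∂^2/(∂x_i ∂x_j) [(B Bᵀ)_{ij}(x) p(x)],

and the proposed methods approximate p directly rather than simulating sample paths of the stochastic differential equation.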
Laplace's method of asymptotic approximation is applied to approximate the probability integrals which arise when analyzing systems with modeling uncertainty. The asymptotic approximation reduces the problem of evaluating a multidimensional integral to solving a minimization problem, and the results become asymptotically exact as the uncertainty in the modeling goes to zero. The method is found to provide good approximations for the moments and outcrossing rates for systems with uncertain parameters under stochastic excitation, even when there is a large amount of uncertainty in the parameters. The method is also applied to classical reliability integrals, providing approximations in both the transformed (independent, normally distributed) variables and the original variables. In the transformed variables, the asymptotic approximation yields a very simple formula for approximating the value of SORM integrals. In many cases, it may be computationally expensive to transform the variables, and an approximation is also developed in the original variables. Examples are presented illustrating the accuracy of the approximations, and results are compared with existing approximations.
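In its generic form, the approximation referred to above is Laplace's asymptotic formula (a standard statement, included for orientation): for a smooth g with a unique interior minimum at θ*,

∫ h(θ) exp(−g(θ)/ε) dθ ≈ h(θ*) exp(−g(θ*)/ε) (2πε)^(n/2) |det ∇^2 g(θ*)|^(−1/2) as ε → 0,

which is why evaluating the multidimensional probability integral reduces to a minimization problem for g.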
Abstract:
The discovery that the three-ring polyamide Im-Py-Py-Dp containing imidazole (Im) and pyrrole (Py) carboxamides binds the DNA sequence 5'-(A,T)G(A,T)C(A,T)-3' as an antiparallel dimer offers a new model for the design of ligands for specific recognition of sequences in the minor groove containing both G,C and A,T base pairs. In Chapter 2, experiments are described in which the sequential addition of five N-methylpyrrolecarboxamides to the imidazole-pyrrole polyamide Im-Py-Py-Dp affords a series of six homologous polyamides, Im-(Py)2-7-Dp, that differ in the size of their binding site, apparent first order binding affinity, and sequence specificity. These results demonstrate that DNA sequences up to nine base pairs in length can be specifically recognized by imidazole-pyrrole polyamides containing three to seven rings by 2:1 polyamide-DNA complex formation in the minor groove. Recognition of a nine base pair site defines the new lower limit of the binding site size that can be recognized by polyamides containing exclusively imidazole and pyrrolecarboxamides. The results of this study should provide useful guidelines for the design of new polyamides that bind longer DNA sites with enhanced affinity and specificity.
In Chapter 3 the design and synthesis of the hairpin polyamide Im-Py-Im-Py-γ-Im-Py-Im-Py-Dp is described. Quantitative DNase I footprint titration experiments reveal that Im-Py-Im-Py-γ-Im-Py-Im-Py-Dp binds six base pair 5'-(A,T)GCGC(A,T)-3' sequences with 30-fold higher affinity than the unlinked polyamide Im-Py-Im-Py-Dp. The hairpin polyamide does not discriminate between A•T and T•A at the first and sixth positions of the binding site, as three sites 5'-TGCGCT-3', 5'-TGCGCA-3', and 5'-AGCGCT-3' are bound with similar affinity. However, Im-Py-Im-Py-γ-Im-Py-Im-Py-Dp is specific for and discriminates between G•C and C•G base pairs in the 5'-GCGC-3' core, as evidenced by lower affinities for the mismatched sites 5'-AACGCA-3', 5'-TGCGTT-3', 5'-TGCGGT-3', and 5'-ACCGCT-3'.
In Chapter 4, experiments are described in which a kinetically stable hexa-aza Schiff base La^(3+) complex is covalently attached to a Tat(49-72) peptide which has been shown to bind the HIV-1 TAR RNA sequence. Although these metallo-peptides cleave TAR site-specifically in the hexanucleotide loop to afford products consistent with hydrolysis, a series of control experiments suggests that the observed cleavage is not caused by a sequence-specifically bound Tat(49-72)-La(L)^(3+) peptide.
Abstract:
This thesis presents a technique for obtaining the response of linear structural systems with parameter uncertainties subjected to either deterministic or random excitation. The parameter uncertainties are modeled as random variables or random fields, and are assumed to be time-independent. The new method is an extension of the deterministic finite element method to the space of random functions.
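In schematic form (our notation, not necessarily the thesis's), such an extension replaces the deterministic system K u = f(t) by one whose operators depend on the uncertain parameters,

K(θ) u(θ, t) = f(t), with K(θ) = K_0 + Σ_{i=1}^{m} θ_i K_i,

where the θ_i are random variables (or a discretized random field) and the response u is expanded over the space of random functions as well as the spatial finite element basis.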
First, the general formulation of the method is developed, in the case where the excitation is deterministic in time. Next, the application of this formulation to systems satisfying the one-dimensional wave equation with uncertainty in their physical properties is described. A particular physical conceptualization of this equation is chosen for study, and some engineering applications are discussed in both an earthquake ground motion and a structural context.
Finally, the formulation of the new method is extended to include cases where the excitation is random in time. Application of this formulation to the random response of a primary-secondary system is described. It is found that parameter uncertainties can have a strong effect on the system response characteristics.
Abstract:
This study addresses the problem of obtaining reliable velocities and displacements from accelerograms, a concern which often arises in earthquake engineering. A closed-form acceleration expression with random parameters is developed to test any strong-motion accelerogram processing method. Integration of this analytical time history yields the exact velocities, displacements and Fourier spectra. Noise and truncation can also be added. A two-step testing procedure is proposed and the original Volume II routine is used as an illustration. The main sources of error are identified and discussed. Although these errors may be reduced, it is impossible to extract the true time histories from an analog or digital accelerogram because of the uncertain noise level and missing data. Based on these uncertainties, a probabilistic approach is proposed as a new accelerogram processing method. A most probable record is presented as well as a reliability interval which reflects the level of error-uncertainty introduced by the recording and digitization process. The data is processed in the frequency domain, under assumptions governing either the initial value or the temporal mean of the time histories. This new processing approach is tested on synthetic records. It induces little error and the digitization noise is adequately bounded. Filtering is intended to be kept to a minimum and two optimal error-reduction methods are proposed. The "noise filters" reduce the noise level at each harmonic of the spectrum as a function of the signal-to-noise ratio. However, the correction at low frequencies is not sufficient to significantly reduce the drifts in the integrated time histories. The "spectral substitution method" uses optimization techniques to fit spectral models of near-field, far-field or structural motions to the amplitude spectrum of the measured data. The extremes of the spectrum of the recorded data where noise and error prevail are then partly altered, but not removed, and statistical criteria provide the choice of the appropriate cutoff frequencies. This correction method has been applied to existing strong-motion far-field, near-field and structural data with promising results. Since this correction method maintains the whole frequency range of the record, it should prove to be very useful in studying the long-period dynamics of local geology and structures.
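A minimal sketch of the per-harmonic "noise filter" idea described above, assuming a Wiener-type gain SNR/(1 + SNR) at each frequency (the thesis's actual weighting may differ, and the record and noise level here are synthetic):

```python
import numpy as np

dt = 0.02                                   # hypothetical sampling interval (s)
t = np.arange(0, 40, dt)
rng = np.random.default_rng(0)

signal = np.sin(2 * np.pi * 1.0 * t) * np.exp(-0.05 * t)   # synthetic accelerogram
record = signal + 0.2 * rng.standard_normal(t.size)        # add digitization noise

# Per-harmonic gain based on an assumed signal-to-noise ratio.
spectrum = np.fft.rfft(record)
noise_power = (0.2 ** 2) * t.size            # expected |FFT|^2 of the white noise per bin
signal_power = np.maximum(np.abs(spectrum) ** 2 - noise_power, 0.0)
gain = signal_power / (signal_power + noise_power)          # = SNR / (1 + SNR)

filtered = np.fft.irfft(gain * spectrum, n=t.size)
print("rms error before filtering: %.3f" % np.std(record - signal))
print("rms error after  filtering: %.3f" % np.std(filtered - signal))
```

As the abstract notes, this kind of correction attenuates broadband noise but cannot by itself remove the low-frequency drifts that dominate the integrated velocities and displacements.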
Abstract:
Chapter I
Theories for organic donor-acceptor (DA) complexes in solution and in the solid state are reviewed, and compared with the available experimental data. As shown by McConnell et al. (Proc. Natl. Acad. Sci. U.S., 53, 46-50 (1965)), the DA crystals fall into two classes, the holoionic class with a fully or almost fully ionic ground state, and the nonionic class with little or no ionic character. If the total lattice binding energy 2ε_1 (per DA pair) gained in ionizing a DA lattice exceeds the cost 2ε_0 of ionizing each DA pair, i.e., if ε_1 + ε_0 < 0, then the lattice is holoionic. The charge-transfer (CT) band in crystals and in solution can be explained, following Mulliken, by a second-order mixing of states, or by any theory that makes the CT transition strongly allowed, and yet due to a small change in the ground state of the non-interacting components D and A (or D+ and A-). The magnetic properties of the DA crystals are discussed.
Chapter II
A computer program, EWALD, was written to calculate by the Ewald fast-convergence method the crystal Coulomb binding energy E_C due to classical monopole-monopole interactions for crystals of any symmetry. The precision of the E_C values obtained is high: the uncertainties, estimated from the effect on E_C of changing the Ewald convergence parameter η, ranged from ± 0.00002 eV to ± 0.01 eV in the worst case. The charge distribution for organic ions was idealized as fractional point charges localized at the crystallographic atomic positions: these charges were chosen from available theoretical and experimental estimates. The uncertainty in E_C due to different charge distribution models is typically ± 0.1 eV (± 3%): thus, even the simple Hückel model can give decent results.
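For illustration, a bare-bones version of the Ewald fast-convergence sum that EWALD implements is sketched below for a toy two-ion cubic cell (the charges, cell dimensions, and convergence parameter η are invented for the example; the thesis applied the method to full organic DA crystal structures):

```python
import numpy as np
from math import erfc

# Minimal Ewald sum for the Coulomb energy of point charges in a periodic
# orthorhombic cell: real-space + reciprocal-space + self terms.
KE = 14.3996                                  # e^2/(4 pi eps0) in eV*Angstrom
cell = np.array([5.64, 5.64, 5.64])           # cell edges (Angstrom), toy values
frac = np.array([[0.0, 0.0, 0.0], [0.5, 0.5, 0.5]])   # fractional coordinates
q = np.array([+1.0, -1.0])                    # fractional point charges
pos = frac * cell
vol = np.prod(cell)
eta = 0.35                                    # convergence parameter (1/Angstrom)

# Real-space sum over periodic images within a cutoff.
e_real = 0.0
for n in np.ndindex(5, 5, 5):
    shift = (np.array(n) - 2) * cell
    for i in range(len(q)):
        for j in range(len(q)):
            if np.all(shift == 0) and i == j:
                continue
            r = np.linalg.norm(pos[i] - pos[j] + shift)
            e_real += 0.5 * q[i] * q[j] * erfc(eta * r) / r

# Reciprocal-space sum over nonzero reciprocal lattice vectors.
e_recip = 0.0
for m in np.ndindex(9, 9, 9):
    g = 2.0 * np.pi * (np.array(m) - 4) / cell
    g2 = g @ g
    if g2 < 1e-12:
        continue
    s = np.sum(q * np.exp(1j * (pos @ g)))
    e_recip += (2.0 * np.pi / vol) * np.exp(-g2 / (4 * eta ** 2)) / g2 * abs(s) ** 2

e_self = -eta / np.sqrt(np.pi) * np.sum(q ** 2)
e_total = KE * (e_real + e_recip + e_self)
print("Ewald Coulomb energy: %.3f eV per cell" % e_total)
```

The split into a short-ranged erfc-screened real-space sum and a rapidly decaying reciprocal-space sum is what makes the method converge quickly, and varying η checks the numerical precision, as described above.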
E_C for Wurster's Blue Perchlorate is -4.1 eV/molecule: the crystal is stable under the binding provided by direct Coulomb interactions. E_C for N-Methylphenazinium Tetracyanoquinodimethanide is 0.1 eV: exchange Coulomb interactions, which cannot be estimated classically, must provide the necessary binding.
EWALD was also used to test the McConnell classification of DA crystals. For the holoionic (1:1)-(N,N,N',N'-Tetramethyl-para-phenylenediamine:7,7,8,8-Tetracyanoquinodimethan), E_C = -4.0 eV while 2ε_0 = 4.65 eV: clearly, exchange forces must provide the balance. For the holoionic (1:1)-(N,N,N',N'-Tetramethyl-para-phenylenediamine:para-Chloranil), E_C = -4.4 eV, while 2ε_0 = 5.0 eV: again E_C falls short of 2ε_0. As a Gedankenexperiment, two nonionic crystals were assumed to be ionized: for (1:1)-(Hexamethylbenzene:para-Chloranil), E_C = -4.5 eV, 2ε_0 = 6.6 eV; for (1:1)-(Naphthalene:Tetracyanoethylene), E_C = -4.3 eV, 2ε_0 = 6.5 eV. Thus, exchange energies in these nonionic crystals must not exceed 1 eV.
Chapter III
A rapid-convergence quantum-mechanical formalism is derived to calculate the electronic energy of an arbitrary molecular (or molecular-ion) crystal: this provides estimates of crystal binding energies which include the exchange Coulomb interactions. Previously obtained LCAO-MO wavefunctions for the isolated molecule(s) ("unit cell spin-orbitals") provide the starting point. Bloch's theorem is used to construct "crystal spin-orbitals". Overlap between the unit cell orbitals localized in different unit cells is neglected, or is eliminated by Löwdin orthogonalization. Then simple formulas for the total kinetic energy Q^(XT)_λ, nuclear attraction [λ/λ]^(XT), direct Coulomb [λλ/λ'λ']^(XT) and exchange Coulomb [λλ'/λ'λ]^(XT) integrals are obtained, and direct-space brute-force expansions in atomic wavefunctions are given. Fourier series are obtained for [λ/λ]^(XT), [λλ/λ'λ']^(XT), and [λλ'/λ'λ]^(XT) with the help of the convolution theorem; the Fourier coefficients require the evaluation of Silverstone's two-center Fourier transform integrals. If the short-range interactions are calculated by brute-force integrations in direct space, and the long-range effects are summed in Fourier space, then rapid convergence is possible for [λ/λ]^(XT), [λλ/λ'λ']^(XT) and [λλ'/λ'λ]^(XT). This is achieved, as in the Ewald method, by modifying each atomic wavefunction by a "Gaussian convergence acceleration factor", and evaluating separately in direct and in Fourier space appropriate portions of [λ/λ]^(XT), etc., where some of the portions contain the Gaussian factor.
Abstract:
Earthquake early warning (EEW) systems have been developing rapidly over the past decade. The Japan Meteorological Agency (JMA) has an EEW system that was operating during the 2011 M9 Tohoku earthquake in Japan, and this increased the awareness of EEW systems around the world. While longer-time earthquake prediction still faces many challenges before it becomes practical, the availability of shorter-time EEW opens a new door for earthquake loss mitigation. After an earthquake fault begins rupturing, an EEW system uses the first few seconds of recorded seismic waveform data to quickly predict the hypocenter location, magnitude, origin time and the expected shaking intensity level around the region. This early warning information is broadcast to different sites before the strong shaking arrives. The warning lead time of such a system is short, typically a few seconds to a minute or so, and the information is uncertain. These factors limit the scope for human intervention to activate mitigation actions, and this must be addressed for engineering applications of EEW. This study applies a Bayesian probabilistic approach, along with machine learning techniques and decision theory from economics, to improve different aspects of EEW operation, including extending it to engineering applications.
Existing EEW systems are often based on a deterministic approach. They typically assume that only a single event occurs within a short period of time, an assumption that led to many false alarms after the Tohoku earthquake in Japan. This study develops a probability-based EEW algorithm, built on an existing deterministic model, that extends the EEW system to the case of concurrent events, which are often observed during the aftershock sequence after a large earthquake.
To overcome the challenges of uncertain information and the short lead time of EEW, this study also develops an earthquake probability-based automated decision-making (ePAD) framework to make robust decisions for EEW mitigation applications. A cost-benefit model that can capture the uncertainties in EEW information and the decision process is used. This approach, called Performance-Based Earthquake Early Warning, is based on the PEER Performance-Based Earthquake Engineering method. Use of surrogate models is suggested to improve computational efficiency. New models are also proposed to add the influence of lead time into the cost-benefit analysis. For example, a value-of-information model is used to quantify the potential value of delaying the activation of a mitigation action for a possible reduction of the uncertainty of EEW information in the next update. Two practical examples, evacuation alert and elevator control, are studied to illustrate the ePAD framework. Potential advanced EEW applications, such as the case of multiple-action decisions and the synergy of EEW and structural health monitoring systems, are also discussed.
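A minimal sketch of the cost-benefit logic underlying such automated decisions (the loss model, numbers, and names here are invented for illustration; the actual ePAD models are considerably richer): trigger the mitigation action when the expected loss avoided, averaged over the uncertain predicted intensity, exceeds the cost of acting.

```python
import numpy as np

# Hypothetical EEW prediction: ground-motion intensity is uncertain,
# modeled here as lognormal with median `im_median` and log-std `beta`.
im_median, beta = 0.25, 0.5          # e.g. peak ground acceleration in g (invented)
action_cost = 2.0                    # cost of stopping the elevator (arbitrary units)

def expected_loss(acted, im_samples):
    """Invented loss model: damage grows with intensity; the action halves it."""
    base_loss = 50.0 * np.clip(im_samples - 0.2, 0.0, None)
    return np.mean(0.5 * base_loss if acted else base_loss)

rng = np.random.default_rng(1)
im_samples = im_median * np.exp(beta * rng.standard_normal(100_000))

loss_if_wait = expected_loss(False, im_samples)
loss_if_act = expected_loss(True, im_samples) + action_cost
print("expected loss if no action: %.2f" % loss_if_wait)
print("expected loss if action:    %.2f" % loss_if_act)
print("decision:", "ACT" if loss_if_act < loss_if_wait else "WAIT")
```

The value-of-information idea mentioned above extends this comparison by also weighing the benefit of waiting for the next, presumably less uncertain, EEW update before committing to the action.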