10 results for Laboratory test

in CaltechTHESIS


Relevance: 60.00%

Abstract:

Liquefaction is a devastating instability associated with saturated, loose, and cohesionless soils. It poses a significant risk to distributed infrastructure systems that are vital for the security, economy, safety, health, and welfare of societies. To make our cities resilient to the effects of liquefaction, it is important to be able to identify the areas that are most susceptible. Prevalent methodologies for identifying susceptible areas include conventional slope stability analysis and the use of so-called liquefaction charts; however, these methodologies have limitations, which motivate our research objectives. In this dissertation, we investigate the mechanics of the origin of liquefaction in a laboratory test using grain-scale simulations, which helps us (i) understand why certain soils liquefy under certain conditions, and (ii) identify a necessary precursor to the onset of flow liquefaction. Furthermore, we investigate the mechanics of liquefaction charts using a continuum plasticity model; this can help in modeling the surface hazards of liquefaction following an earthquake. Finally, we investigate the microscopic definition of soil shear wave velocity, a property used as an index to quantify the liquefaction resistance of soil. We show that anisotropy in fabric, or grain arrangement, can be correlated with anisotropy in shear wave velocity. This has the potential to quantify the effects of sample disturbance when a soil specimen is extracted from the field. In conclusion, by developing a more fundamental understanding of soil liquefaction, this dissertation takes necessary steps toward a more physical assessment of liquefaction susceptibility at the field scale.
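
For context, shear wave velocity serves as a stiffness-based index through the standard small-strain elasticity relation (the microscopic, grain-scale definition investigated in the thesis may differ in detail):

```latex
V_s = \sqrt{\frac{G}{\rho}}
```

where $G$ is the small-strain shear modulus and $\rho$ the mass density of the soil, so a stiffer or more densely packed fabric transmits shear waves faster, and directional differences in fabric produce directional differences in $V_s$.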

Relevance: 30.00%

Abstract:

In the quest for a descriptive theory of decision-making, the rational actor model in economics imposes rather unrealistic expectations and abilities on human decision makers. The further we move from idealized scenarios, such as perfectly competitive markets, and ambitiously extend the reach of the theory to describe everyday decision-making situations, the less sense these assumptions make. Behavioural economics has instead proposed models based on assumptions that are more psychologically realistic, with the aim of gaining more precision and descriptive power. Increased psychological realism, however, comes at the cost of a greater number of parameters and model complexity. There is now a plethora of models, based on different assumptions and applicable in differing contextual settings, and selecting the right model to use tends to be an ad hoc process. In this thesis, we develop optimal experimental design methods and evaluate different behavioural theories against evidence from lab and field experiments.

We first look at evidence from controlled laboratory experiments. Subjects are presented with choices between monetary gambles, or lotteries. Different decision-making theories evaluate the choices differently and make distinct predictions about the subjects' choices; theories whose predictions are inconsistent with the actual choices can be systematically eliminated. Behavioural theories can have multiple parameters, requiring complex experimental designs with a very large number of possible choice tests, which imposes computational and economic constraints on classical experimental design methods. We develop a methodology of adaptive tests, Bayesian Rapid Optimal Adaptive Designs (BROAD), which sequentially chooses the "most informative" test at each stage and, based on the response, updates its posterior beliefs over the theories; the updated beliefs inform the choice of the next test to run. BROAD uses the Equivalence Class Edge Cutting (EC2) criterion to select tests. We prove that the EC2 criterion is adaptively submodular, which allows us to prove theoretical guarantees against the Bayes-optimal testing sequence even in the presence of noisy responses. In simulated ground-truth experiments, we find that the EC2 criterion recovers the true hypotheses with significantly fewer tests than more widely used criteria such as Information Gain and Generalized Binary Search. We show, theoretically as well as experimentally, that, surprisingly, these popular criteria can perform poorly in the presence of noise or subject errors. Furthermore, we use the adaptive submodularity of EC2 to implement an accelerated greedy version of BROAD, which leads to orders-of-magnitude speedups over other methods.
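
As a rough sketch of the adaptive loop described above (the theories, tests, and likelihoods here are invented, and expected information gain stands in for the thesis's EC2 criterion as the selection objective):

```python
import numpy as np

# Hypothetical setup: each candidate "theory" assigns a probability that a
# subject picks lottery A over lottery B on each candidate test.
rng = np.random.default_rng(0)
n_theories, n_tests = 4, 50
pred = rng.uniform(0.05, 0.95, size=(n_theories, n_tests))  # P(choose A | theory, test)

posterior = np.full(n_theories, 1.0 / n_theories)  # uniform prior over the theories

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def expected_information_gain(posterior, t):
    """Expected entropy reduction from running test t (a stand-in for EC2,
    which instead cuts edges between equivalence classes of hypotheses)."""
    p_a = float(np.sum(posterior * pred[:, t]))        # marginal P(response = A)
    post_a = posterior * pred[:, t] / p_a
    post_b = posterior * (1 - pred[:, t]) / (1 - p_a)
    return entropy(posterior) - (p_a * entropy(post_a) + (1 - p_a) * entropy(post_b))

true_theory = 2                                        # simulated ground truth
for _ in range(15):
    t = max(range(n_tests), key=lambda j: expected_information_gain(posterior, j))
    chose_a = rng.random() < pred[true_theory, t]      # simulated subject response
    likelihood = pred[:, t] if chose_a else 1 - pred[:, t]
    posterior = posterior * likelihood                 # Bayes update
    posterior /= posterior.sum()

print("posterior over theories:", np.round(posterior, 3))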

We use BROAD to perform two experiments. First, we compare the main classes of theories for decision-making under risk, namely expected value, prospect theory, constant relative risk aversion (CRRA), and moments models. Subjects are given an initial endowment and sequentially presented with choices between two lotteries, with the possibility of losses. The lotteries are selected using BROAD, and 57 subjects from Caltech and UCLA are incentivized by randomly realizing one of the lotteries chosen. Aggregate posterior probabilities over the theories show limited evidence in favour of CRRA and moments models. Classifying the subjects into types shows that most subjects are described by prospect theory, followed by expected value. Adaptive experimental design raises the possibility of strategic manipulation: subjects could mask their true preferences and choose differently in order to obtain more favourable tests in later rounds, thereby increasing their payoffs. We pay close attention to this problem; strategic manipulation is ruled out because it is infeasible in practice and because we find no signatures of it in our data.
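
To make the competing risk models concrete, here is a minimal sketch of how a single two-outcome lottery is scored under three of them (the parameter values are textbook illustrations, not estimates from the experiment, and the moments models are omitted):

```python
import numpy as np

outcomes = np.array([40.0, -20.0])   # a hypothetical two-outcome lottery
probs = np.array([0.5, 0.5])

def expected_value(x, p):
    return float(np.dot(p, x))

def crra_utility(x, p, rho=0.5, endowment=50.0):
    """CRRA expected utility over final wealth; the endowment keeps wealth positive."""
    w = endowment + x
    return float(np.dot(p, w ** (1 - rho) / (1 - rho)))

def prospect_value(x, p, alpha=0.88, lam=2.25, gamma=0.61):
    """Kahneman-Tversky value function with loss aversion and probability weighting."""
    mag = np.abs(x) ** alpha
    v = np.where(x >= 0, mag, -lam * mag)              # losses loom larger
    w = p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)
    return float(np.dot(w, v))

for name, score in [("expected value", expected_value(outcomes, probs)),
                    ("CRRA", crra_utility(outcomes, probs)),
                    ("prospect theory", prospect_value(outcomes, probs))]:
    print(f"{name:>15}: {score:.3f}")
```

Because each theory ranks the same pair of lotteries differently, a well-chosen test is one on which the candidate theories disagree, which is what the adaptive design exploits.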

In the second experiment, we compare the main theories of time preference: exponential discounting, hyperbolic discounting, the "present bias" models (quasi-hyperbolic (α, β) discounting and fixed-cost discounting), and generalized-hyperbolic discounting. Forty subjects from UCLA were given choices between two options: a smaller but more immediate payoff versus a larger but later payoff. We found very limited evidence for present-bias models and hyperbolic discounting; most subjects were classified as generalized-hyperbolic discounting types, followed by exponential discounting.
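
For reference, these discounting families differ only in the discount function D(t) they apply to a payoff delayed by t periods; a minimal sketch with illustrative parameters (the common (β, δ) parameterization stands in for the thesis's (α, β) notation, and fixed-cost discounting is omitted):

```python
import numpy as np

# Discount factors D(t): the weight placed on a payoff delayed by t periods.
def exponential(t, delta=0.9):
    return delta ** t

def hyperbolic(t, k=0.5):                       # Mazur's one-parameter hyperbola
    return 1.0 / (1.0 + k * t)

def quasi_hyperbolic(t, beta=0.7, delta=0.95):  # "present bias": t = 0 is special
    return np.where(t == 0, 1.0, beta * delta ** t)

def generalized_hyperbolic(t, a=1.0, b=2.0):    # Loewenstein-Prelec form
    return (1.0 + a * t) ** (-b / a)

t = np.arange(6)
for name, f in [("exponential", exponential), ("hyperbolic", hyperbolic),
                ("quasi-hyperbolic", quasi_hyperbolic),
                ("generalized hyperbolic", generalized_hyperbolic)]:
    print(f"{name:>22}:", np.round(f(t), 3))
```

A subject choosing between a smaller-sooner and a larger-later payoff is classified by which D(t) best rationalizes the pattern of choices.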

In these models, the passage of time is linear. We instead consider a psychological model in which the perception of time is subjective. We prove that when biological (subjective) time is positively dependent, it gives rise to hyperbolic discounting and temporal choice inconsistency.

We also test the predictions of behavioural theories in the "wild", paying particular attention to prospect theory, which emerged as the dominant theory in our lab experiments on risky choice. Loss aversion and reference dependence predict that consumers will behave in ways distinctly different from those predicted by the standard rational model. Specifically, loss aversion predicts that when an item is offered at a discount, the demand for it will be greater than its price elasticity alone can explain; even more importantly, when the item is no longer discounted, demand for its close substitute will increase excessively. We tested this prediction using a discrete choice model with a loss-averse utility function on data from a large eCommerce retailer. Not only did we identify loss aversion, but we also found that the effect decreased with consumers' experience. We outline the policy implications that consumer loss aversion entails, and strategies for competitive pricing.
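
A schematic of the kind of loss-averse discrete-choice (logit) model described (the functional form, reference-price adaptation, and parameters here are illustrative assumptions, not the estimated model from the retailer data):

```python
import numpy as np

def choice_probabilities(prices, ref_prices, value=10.0, alpha=1.0, lam=2.5):
    """Multinomial logit over two substitute items plus an outside option.

    Discounts relative to the reference price enter as gains with weight
    alpha; price increases enter as losses scaled up by loss aversion lam.
    """
    gap = ref_prices - prices                         # > 0 means a perceived gain
    gain_loss = np.where(gap >= 0, alpha * gap, lam * alpha * gap)
    u = value - prices + gain_loss                    # deterministic utilities
    expu = np.exp(np.append(u, 0.0))                  # outside option has utility 0
    return expu / expu.sum()

# Item A discounted from a reference price of 9.0; item B unchanged.
print("during discount:", np.round(choice_probabilities(np.array([8.0, 8.5]),
                                                        np.array([9.0, 8.5])), 3))
# Discount removed after references have adapted: buying A now feels like a loss,
# so demand shifts excessively toward the substitute B.
print("after discount: ", np.round(choice_probabilities(np.array([9.0, 8.5]),
                                                        np.array([8.0, 8.5])), 3))
```

The asymmetry (lam > 1) is what generates excess demand during the discount and the excess switching to the substitute once the discount ends.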

In future work, BROAD should be widely applicable for testing different behavioural models, e.g. in social preference and game theory, and in different contextual settings. Additional measurements beyond choice data, including biological measurements such as skin conductance, can be used to eliminate hypotheses more rapidly and speed up model comparison. Discrete choice models also provide a framework for testing behavioural models with field data, and encourage combined lab-field experiments.

Relevance: 20.00%

Abstract:

In three essays we examine user-generated product ratings with aggregation. While recommendation systems have been studied extensively, this simple type of recommendation system has been neglected, despite its prevalence in the field. We develop a novel theoretical model of user-generated ratings that improves upon previous work in three ways: it considers rational agents and allows them to abstain from rating when rating is costly; it incorporates rating aggregation (such as averaging ratings); and it considers the effect of multiple simultaneous raters on rating strategies. In the first essay we provide a partial characterization of equilibrium behavior. In the second essay we test the theoretical model in the laboratory, and in the third we apply established behavioral models to the data generated in the lab. This study provides clues to the prevalence of extreme-valued ratings in field implementations. We show theoretically that, in equilibrium, ratings distributions do not represent the value distributions of sincere ratings. Indeed, we show that if rating strategies follow a set of regularity conditions, then in equilibrium the rate at which players participate is increasing in the extremity of agents' valuations of the product. This theoretical prediction is borne out in the lab. We also find that human subjects show a disproportionate predilection for sincere rating, and that when they do send insincere ratings, these are almost always in the direction of exaggeration. Both sincere and exaggerated ratings occur with great frequency even though such rating strategies are not in subjects' best interest. We therefore apply the behavioral concepts of quantal response equilibrium (QRE) and cursed equilibrium (CE) to the experimental data. Together, these theories explain the data significantly better than a theory of rational, Bayesian behavior does, accurately predicting key comparative statics. However, they fail to predict the high rates of sincerity, and it is clear that a better theory is needed.
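
The quantal response idea can be illustrated in a few lines (the payoffs below are invented; the thesis estimates QRE and CE on the actual experimental game):

```python
import numpy as np

def logit_quantal_response(expected_payoffs, lam=2.0):
    """Logit quantal response: choice probabilities rise smoothly with payoff.

    lam -> infinity recovers exact best response; lam = 0 gives uniform noise.
    """
    z = lam * np.asarray(expected_payoffs, dtype=float)
    z -= z.max()                       # subtract max for numerical stability
    p = np.exp(z)
    return p / p.sum()

# Hypothetical expected payoffs for {abstain, rate sincerely, exaggerate}:
# even when exaggerating is payoff-maximizing, QRE still predicts that
# sincere ratings occur with substantial frequency.
print(np.round(logit_quantal_response([0.0, 0.40, 0.55]), 3))
```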

Relevance: 20.00%

Abstract:

Secondary organic aerosol (SOA) is produced in the atmosphere by oxidation of volatile organic compounds. Laboratory chambers are used to understand the formation mechanisms and evolution of SOA under controlled conditions. This thesis presents studies of SOA formed from anthropogenic and biogenic precursors and discusses the effects of chamber walls on suspended vapors and particles.

During a chamber experiment, suspended vapors and particles can interact with the chamber walls. Particle wall loss is relatively well understood, but vapor wall loss has received little study. Vapor wall loss of 2,3-epoxy-1,4-butanediol (BEPOX) and glyoxal was identified, quantified, and found to depend on chamber age and relative humidity.
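
Chamber wall loss of a vapor is commonly treated as a first-order process (a standard modeling assumption in the chamber literature, used here only to illustrate how such losses are quantified):

```latex
\frac{dC}{dt} = -k_w C
\quad\Longrightarrow\quad
C(t) = C_0 \, e^{-k_w t}
```

where $C$ is the suspended vapor concentration and $k_w$ is the wall-loss rate coefficient; a dependence of $k_w$ on chamber age and relative humidity is exactly the kind of behavior reported above.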

Particles reside in the atmosphere for a week or more and can evolve chemically during that time, a process termed aging. Simulating aging in laboratory chambers has proven challenging. A protocol was developed to extend the duration of a chamber experiment to 36 h of oxidation and was used to evaluate the aging of SOA produced from m-xylene. Total SOA mass concentration increased and then decreased with increasing photooxidation, suggesting a transition from functionalization to fragmentation chemistry driven by photochemical processes. SOA oxidation, measured as the bulk particle elemental oxygen-to-carbon ratio and the fraction of organic mass at m/z 44, increased continuously beginning after 5 h of photooxidation.

The physical state and chemical composition of an organic aerosol affect the mixing of aerosol components and its interactions with condensing species. A laboratory chamber protocol was developed to evaluate the mixing of SOA produced sequentially from two different sources by heating the chamber to induce particle evaporation. Using this protocol, SOA produced from toluene was found to be less volatile than that produced from α-pinene. When the two types of SOA were formed sequentially, the evaporation behavior most closely resembled that of SOA from the second parent hydrocarbon, suggesting that the mixed SOA particles consist of a core of SOA from the first precursor coated by a layer of SOA from the second precursor, indicative of limited mixing.

Relevance: 20.00%

Abstract:

The dynamic properties of a structure are a function of its physical properties, and changes in the physical properties of the structure, including the introduction of structural damage, can cause changes in its dynamic behavior. Structural health monitoring (SHM) and damage detection methods provide a means to assess the structural integrity and safety of a civil structure using measurements of its dynamic properties. In particular, these techniques enable a quick damage assessment following a seismic event. In this thesis, the application of high-frequency seismograms to damage detection in civil structures is investigated.

Two novel methods for SHM are developed and validated using small-scale experimental testing, existing structures in situ, and numerical testing. The first method is developed for pre-Northridge steel moment-resisting frame buildings that are susceptible to weld fracture at beam-column connections. The method is based on using the response of a structure to a nondestructive force (i.e., a hammer blow) to approximate the response of the structure to a damage event (i.e., weld fracture). The method is applied to a small-scale experimental frame, where the impulse response functions of the frame are generated during an impact hammer test. The method is also applied to a numerical model of a steel frame, in which weld fracture is modeled as the tensile opening of a Mode I crack. Impulse response functions are also obtained experimentally for a steel moment-resisting frame building in situ. Results indicate that while the acceleration and velocity records generated by a damage event are best approximated by those generated by a colocated hammer blow, the method may not be robust to noise. The method seems better suited to damage localization, where information such as arrival times and peak accelerations can also indicate the damage location. This is of significance for sparsely instrumented civil structures.

The second SHM method is designed to extract features from high-frequency acceleration records that may indicate the presence of damage. Because short-duration high-frequency signals (i.e., pulses) can be indicative of damage, this method relies on the identification and classification of pulses in the acceleration records. It is recommended that, in practice, the method be combined with a vibration-based method that can estimate the loss of stiffness. Briefly, pulses observed in the acceleration time series when the structure is known to be in an undamaged state are compared with pulses observed when the structure is in a potentially damaged state. By comparing the pulse signatures from these two situations, changes in the high-frequency dynamic behavior of the structure can be identified, and damage signals can be extracted and subjected to further analysis. The method is successfully applied to a small-scale experimental shear beam that is dynamically excited at its base using a shake table and damaged by loosening a screw to create a moving part. Although the damage is aperiodic and nonlinear in nature, the damage signals are accurately identified, and the location of damage is determined using the amplitudes and arrival times of the damage signal. The method is also successfully applied to detect the occurrence of damage in a test-bed data set provided by the Los Alamos National Laboratory, in which nonlinear damage is introduced into a small-scale steel frame by installing a bumper mechanism that limits the motion between two floors. The method is robust despite a low sampling rate, though false negatives (undetected damage signals) begin to occur at high levels of damage, when the frequency of damage events increases. The method is further applied to acceleration data recorded on a damaged cable-stayed bridge in China, provided by the Center of Structural Monitoring and Control at the Harbin Institute of Technology. Acceleration records recorded after the date of damage show a clear increase in high-frequency short-duration pulses compared with those recorded before it. One undamaged pulse and two damage pulses are identified from the data, and the occurrence of the detected damage pulses is consistent with a progression of damage and matches the known chronology of damage.
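
A bare-bones illustration of the pulse-extraction step (the filter choice, threshold rule, and synthetic signal below are assumptions for the sketch, not the thesis's algorithm):

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def detect_pulses(accel, fs, f_cut=50.0, n_sigma=5.0):
    """Flag samples whose high-frequency content exceeds a robust threshold."""
    sos = butter(4, f_cut, btype="highpass", fs=fs, output="sos")
    hf = sosfiltfilt(sos, accel)                            # high-frequency residual
    sigma = 1.4826 * np.median(np.abs(hf - np.median(hf)))  # robust noise level (MAD)
    return np.flatnonzero(np.abs(hf) > n_sigma * sigma)

fs = 1000.0
t = np.arange(0, 2.0, 1.0 / fs)
rng = np.random.default_rng(1)
baseline = 0.01 * rng.standard_normal(t.size)               # "undamaged" record
damaged = baseline.copy()
damaged[1200:1210] += 0.2 * np.sin(2 * np.pi * 200.0 * t[1200:1210])  # injected pulse

print("samples flagged, baseline:", detect_pulses(baseline, fs).size)
print("samples flagged, damaged :", detect_pulses(damaged, fs).size)
```

Comparing the pulses flagged in the baseline record against those flagged in a later record is the essence of the before/after signature comparison described above.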

Relevance: 20.00%

Abstract:

Metallic glasses have typically been treated as a “one size fits all” type of material: every alloy is assumed to have high strength, high hardness, a large elastic limit, corrosion resistance, etc. However, as with traditional crystalline materials, properties depend strongly on the constituent elements, the processing route, and the conditions under which the material will be used. An important distinction can be made between metallic glasses and their composites. Charpy impact toughness measurements are performed to determine the effects of processing and microstructure on bulk metallic glass matrix composites (BMGMCs). Samples are suction cast, machined from commercial plates, or semi-solidly forged (SSF). The SSF specimens are found to have the highest impact toughness, owing to the coarsening of the dendrites that occurs during the semi-solid processing stages. Ductile-to-brittle transition (DTBT) temperatures are measured for a BMGMC. While at room temperature the BMGMC is highly toughened compared with a fully glassy alloy, it undergoes a DTBT by 250 K, at which point its impact toughness mirrors that of the constituent glassy matrix. In the following chapter, BMGMCs are shown to be capable of being capacitively welded to form single, monolithic structures. Shear measurements are performed across welded samples and, at sufficient weld energies, the welds are found to retain the strength of the parent alloy. Cross-sections are inspected via SEM, and no visible crystallization of the matrix occurs.

Next, metallic glasses and BMGMCs are formed into sheets, and eggbox structures are tested in hypervelocity impacts. Metallic glasses are ideal candidates for protection against micrometeorites and orbital debris due to their high hardness and relatively low density. A flat, single-layer BMG is compared to a BMGMC eggbox, and the latter creates a more diffuse projectile cloud after penetration. A three-tiered eggbox structure is also tested by firing a 3.17 mm aluminum sphere at it at 2.7 km/s; the projectile penetrates the first two layers but is successfully contained by the third.

A large series of metallic glass alloys is created, and their wear loss is measured in a pin-on-disk test. Wear is found to vary dramatically among different metallic glasses, with some considerably outperforming the current state-of-the-art crystalline material (most notably Cu₄₃Zr₄₃Al₇Be₇), while others suffer extensive wear loss. Commercially available Vitreloy 1 lost nearly three times as much mass to wear as the same alloy prepared in a laboratory setting. No conclusive correlations are found between wear loss and any of the measured properties (hardness, density, elastic, bulk, or shear modulus, Poisson’s ratio, frictional force, and run-in time). Heat treatments are performed on Vitreloy 1 and Cu₄₃Zr₄₃Al₇Be₇: anneals near the glass transition temperature increase hardness slightly but decrease wear loss significantly, and crystallization of both alloys leads to dramatic increases in wear resistance. Wear tests under vacuum are also performed on the two alloys: Vitreloy 1 experiences a dramatic decrease in wear loss, while Cu₄₃Zr₄₃Al₇Be₇ shows a moderate increase. Finally, gears are fabricated through three techniques: electrical discharge machining of 1 cm by 3 mm cylinders, semi-solid forging, and copper mold suction casting. Initial testing finds the pin-on-disk test to be an accurate predictor of wear performance in gears.

The final chapter explores an exciting technique in the field of additive manufacturing. Laser engineered net shaping (LENS) is a method whereby small amounts of metallic powder are melted by a laser so that shapes and designs can be built layer by layer into a final part. The technique is extended to mixing different powders during melting, so that compositional gradients can be created across a manufactured part. Two compositional gradients are fabricated and characterized. A gradient from Ti-6Al-4V to pure vanadium was chosen for its combination of high strength and light weight on one end and a high melting point on the other; cross-sectional X-ray diffraction shows only the anticipated phases present. A gradient from 304L stainless steel to Invar 36 was created both as a pillar and as a radial gradient; it combines strength and weldability with a material of near-zero coefficient of thermal expansion. Only the austenite phase is found to be present via X-ray diffraction. The coefficient of thermal expansion is measured for four compositions and is found to be tunable depending on composition.

Relevance: 20.00%

Abstract:

Motivated by needs in molecular diagnostics and advances in microfabrication, researchers have turned to microfluidic technology, as it provides approaches to achieve high throughput, high sensitivity, and high resolution. One strategy applied in microfluidics to fulfill such requirements is to convert a continuous analog signal into a digital one. The most common example of this conversion is digital PCR, where, by counting the number of reacted compartments (triggered by the presence of the target entity) out of the total number of compartments, one can use Poisson statistics to calculate the amount of input target.
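
The Poisson step mentioned above can be written out directly: under random partitioning, the fraction of negative compartments is $e^{-\lambda}$, so the mean occupancy is recovered from the positive count. A minimal sketch (the compartment numbers are illustrative):

```python
import math

def digital_assay_estimate(positives, total):
    """Estimate input target copies from a digital (e.g., digital PCR) readout.

    Random partitioning gives P(compartment is negative) = exp(-lam), so
    lam = -ln(1 - positives/total) is the mean copies per compartment.
    """
    if positives >= total:
        raise ValueError("all compartments positive: the assay is saturated")
    lam = -math.log(1.0 - positives / total)
    return lam * total                     # estimated total input copies

# 300 of 1000 compartments react: the input is ~357 copies, not 300,
# because some compartments received more than one target molecule.
print(round(digital_assay_estimate(300, 1000)))
```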

However, there are still problems to be solved and assumptions to be validated before the technology is widely employed. In this dissertation, the digital quantification strategy is examined from two angles: efficiency and robustness. The former is a critical factor in ensuring the accuracy of absolute quantification methods, and the latter is a prerequisite for such technology to be practically implemented in diagnostics beyond the laboratory. The two angles are further framed in a “fate” and “rate” determination scheme, in which the influence of each parameter is attributed to either the fate-determination step or the rate-determination step. In this discussion, microfluidic platforms are used to understand reaction mechanisms at the single-molecule level. Although the discussion raises more challenges for digital assay development, it brings the problem to the attention of the scientific community for the first time.

This dissertation also contributes to developing point-of-care (POC) tests for limited-resource settings. On one hand, it improves access to the tests by incorporating mass-producible, low-cost plastic materials and by integrating new features that allow instant result acquisition and feedback. On the other hand, it explores new isothermal chemistry and new strategies to address important global health concerns, such as cystatin C quantification, HIV/HCV detection and treatment monitoring, and HCV genotyping.

Relevance: 20.00%

Abstract:

This is a two-part thesis concerning the motion of a test particle in a bath. In part one we use an expansion of the operator $PLe^{it(1-P)L}LP$ to shape the Zwanzig equation into a generalized Fokker-Planck equation which involves a diffusion tensor depending on the test particle's momentum and the time.
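
Schematically, a generalized Fokker-Planck equation of the kind described takes the form below (a generic momentum-space form with a momentum- and time-dependent diffusion tensor, shown for orientation; the thesis's derived equation may contain additional terms):

```latex
\frac{\partial f(\mathbf{p},t)}{\partial t}
  = \frac{\partial}{\partial p_i}
    \left[ D_{ij}(\mathbf{p},t)
      \left( \frac{\partial}{\partial p_j} + \frac{\beta\, p_j}{m} \right)
      f(\mathbf{p},t) \right]
```

where $f$ is the test-particle momentum distribution, $D_{ij}(\mathbf{p},t)$ is the diffusion tensor, $m$ the test-particle mass, and $\beta = 1/k_B T$ the inverse bath temperature.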

In part two the resultant equation is studied in some detail for the case of test particle motion in a weakly coupled Lorentz gas. The diffusion tensor for this system is considered; some of its properties are calculated, and it is computed explicitly for the case of a Gaussian potential of interaction.

The equation for the test particle distribution function can be put into the form of an inhomogeneous Schrödinger equation. The term corresponding to the potential energy in the Schrödinger equation is considered. Its structure is studied, and some of its simplest features are used to find the Green's function in the limiting situations of low density and long time.

Relevance: 20.00%

Abstract:

In the cell, the binding of proteins to specific sequences of double-helical DNA is essential for controlling the processes of protein synthesis (at the level of DNA transcription) and cell proliferation (at the level of DNA replication). In the laboratory, the sequence-specific DNA binding/cleaving properties of restriction endonuclease enzymes (secreted by microorganisms to protect them from foreign DNA molecules) have helped to fuel a revolution in molecular biology. The strength and specificity of a protein:DNA interaction depend upon structural features inherent to the protein and DNA sequences, but it is now appreciated that these features (and therefore protein:DNA complexation) may be altered (regulated) by other protein:DNA complexes, or by environmental factors such as temperature or the presence of specific organic molecules or inorganic ions. It is also now appreciated that molecules much smaller than proteins (including antibiotics of molecular weight less than 2000 and oligonucleotides) can bind to double-helical DNA in sequence-specific fashion. Elucidation of structural motifs and microscopic interactions responsible for the specific molecular recognition of DNA leads to greater understanding of natural processes and provides a basis for the design of novel sequence-specific DNA binding molecules. This thesis describes the synthesis and DNA binding/cleaving characteristics of molecules designed to probe structural, stereochemical, and environmental factors that regulate sequence-specific DNA recognition.

Chapter One introduces the DNA minor groove binding antibiotics Netropsin and Distamycin A, which are di- and tri(N-methylpyrrolecarboxamide) peptides, respectively. The method of DNA affinity cleaving, which has been employed to determine the DNA binding properties of designed synthetic molecules, is described. The design and synthesis of a series of Netropsin dimers linked in tail-to-tail fashion (by oxalic, malonic, succinic, or fumaric acid) or in head-to-tail fashion (by glycine, β-alanine, and γ-aminobutanoic acid (Gaba)) are presented. These Bis(Netropsin)s were appended with the iron-chelating functionality EDTA in order to make use of the technique of DNA affinity cleaving. Bis(Netropsin)-EDTA compounds are analogs of penta(N-methylpyrrolecarboxamide)-EDTA (P5E), which may be considered a head-to-tail Netropsin dimer linked by N-methylpyrrolecarboxamide. Low- and high-resolution analysis of pBR322 DNA affinity cleaving by the iron complexes of these molecules indicated that small changes in the length and nature of the linker had significant effects on DNA binding/cleaving efficiency (a measure of DNA binding affinity). DNA binding/cleaving efficiency was found to decrease with changes in the linker in the order β-alanine > succinamide > fumaramide > N-methylpyrrolecarboxamide > malonamide > glycine, γ-aminobutanamide > oxalamide. In general, the Bis(Netropsin)-EDTA:Fe compounds retained the specificity for seven contiguous A:T base pairs characteristic of P5E:Fe binding. However, Bis(Netropsin)-Oxalamide-EDTA:Fe exhibited decreased specificity for A:T base pairs, and Bis(Netropsin)-Gaba-EDTA:Fe exhibited some DNA binding sites of less than seven base pairs. Bis(Netropsin)s linked with diacids have C2-symmetrical DNA binding subunits and exhibited little DNA binding orientation preference, whereas Bis(Netropsin)s linked with amino acids lack C2-symmetrical DNA binding subunits and exhibited higher orientation preferences. A model for the high DNA binding orientation preferences observed with head-to-tail DNA minor groove binding molecules is presented.

Chapter Two describes the design, synthesis, and DNA binding properties of a series of chiral molecules: Bis(Netropsin)-EDTA compounds with linkers derived from (R,R)-, (S,S)-, and (RS,SR)-tartaric acids, (R,R)-, (S,S)-, and (RS,SR)-tartaric acid acetonides, (R)- and (S)-malic acids, N,N-dimethylaminoaspartic acid, and (R)- and (S)-alanine, as well as three constitutional isomers in which an N-methylpyrrolecarboxamide (P1) subunit and a tri(N-methylpyrrolecarboxamide)-EDTA (P3-EDTA) subunit were linked by succinic acid, (R,R)-, and (S,S)-tartaric acids. DNA binding/cleaving efficiencies among this series of molecules and the Bis(Netropsin)s described in Chapter One were found to decrease with changes in the linker in the order β-alanine > succinamide > P1-succinamide-P3 > fumaramide > (S)-malicamide > N-methylpyrrolecarboxamide > (R)-malicamide > malonamide > N,N-dimethylaminoaspartamide > glycine = Gaba = (S,S)-tartaramide = P1-(S,S)-tartaramide-P3 > oxalamide > (RS,SR)-tartaramide = P1-(R,R)-tartaramide-P3 > (R,R)-tartaramide (no sequence-specific DNA binding was detected for Bis(Netropsin)s linked by (R)- or (S)-alanine or by tartaric acid acetonides). The chiral molecules retained DNA binding specificity for seven contiguous A:T base pairs. From the DNA affinity cleaving data it could be determined that: 1) addition of one or two substituents to the linker of Bis(Netropsin)-Succinamide resulted in stepwise decreases in DNA binding affinity; 2) molecules with single hydroxyl substituents bound DNA more strongly than molecules with single dimethylamino substituents; and 3) hydroxyl-substituted molecules of (S) configuration bound more strongly to DNA than molecules of (R) configuration. This stereochemical regulation of DNA binding is proposed to arise from the inherent right-handed twist of (S)-enantiomeric Bis(Netropsin)s versus the inherent left-handed twist of (R)-enantiomeric Bis(Netropsin)s, which makes the (S)-enantiomers more complementary to the right-handed twist of B-form DNA.

Chapter Three describes the design and synthesis of molecules for the study of metalloregulated DNA binding phenomena. Among a series of Bis(Netropsin)-EDTA compounds linked by homologous tethers bearing four, five, or six oxygen atoms, the Bis(Netropsin) linked by a pentaether tether exhibited strongly enhanced DNA binding/cleaving in the presence of strontium or barium cations. The observed metallospecificity was consistent with the known affinities of metal cations for the cyclic hexaether 18-crown-6 in water. High-resolution DNA affinity cleaving analysis indicated that DNA binding by this molecule in the presence of strontium or barium was not only stronger but also of different sequence-specificity than the (weak) binding observed in the absence of metal cations. The metalloregulated binding sites were consistent with A:T binding by the Netropsin subunits and G:C binding by a strontium or barium:pentaether complex. A model for the observed positive metalloregulation and novel sequence-specificity is presented. The effects of 44 different cations on DNA affinity cleaving by P5E:Fe were examined. A series of Bis(Netropsin)-EDTA compounds linked by tethers bearing two, three, four, or five amino groups was also synthesized. These molecules exhibited strong and specific binding to A:T-rich regions of DNA. It was found that the iron complexes of these molecules bound and cleaved DNA most efficiently at pH 6.0-6.5, while P5E:Fe bound and cleaved most efficiently at pH 7.5-8.0. Incubating the Bis(Netropsin)-Polyamine-EDTA:Fe molecules with K₂PdCl₄ abolished their DNA binding/cleaving activity. It is proposed that the observed negative metalloregulation arises from kinetically inert Bis(Netropsin)-Polyamine:Pd(II) complexes or aggregates, which are sterically unsuitable for DNA complexation. Finally, attempts to produce a synthetic metalloregulated DNA binding protein are described. For this study, five derivatives of a synthetic 52-amino-acid-residue DNA binding/cleaving protein were produced. The synthetic mutant proteins carried a novel pentaether ionophoric amino acid residue at different positions within the primary sequence. The proteins did not exhibit significant DNA binding/cleaving activity, but they served to illustrate the potential for introducing novel amino acid residues within DNA binding protein sequences, and for the development of the tricyclohexyl ester of EDTA as a superior reagent for the introduction of EDTA into synthetic proteins.

Chapter Four describes the discovery and characterization of a new DNA binding/cleaving agent, [SalenMn(III)]OAc. This metal complex produces single- and double-strand cleavage of DNA, with specificity for A:T-rich regions, in the presence of oxygen atom donors such as iodosylbenzene, hydrogen peroxide, or peracids. Maximal cleavage by [SalenMn(III)]OAc was produced at pH 6-7. A comparison of DNA single- and double-strand cleavage by [SalenMn(III)]+ and other small molecules (Methidiumpropyl-EDTA:Fe, Distamycin-EDTA:Fe, Neocarzinostatin, Bleomycin:Fe) is presented. It was found that DNA cleavage by [SalenMn(III)]+ did not require the presence of dioxygen, and that base treatment of DNA subsequent to cleavage by [SalenMn(III)]+ afforded greater cleavage and alterations in the cleavage patterns. Analysis of the DNA products formed upon cleavage by [SalenMn(III)]+ indicated that cleavage was due to oxidation of the sugar-phosphate backbone of DNA. Several mechanisms consistent with the observed products and reaction requirements are discussed.

Chapter Five describes progress on some additional studies. In one study, the DNA binding/cleaving specificities of Distamycin-EDTA derivatives bearing pyrrole N-isopropyl substituents were found to be the same as those of derivatives bearing pyrrole N-methyl substituents. In a second study, the design of and synthetic progress towards a series of nucleopeptide activators of transcription are presented. Five synthetic plasmids designed to test for activation of in vitro run-off transcription by DNA triple helix-forming oligonucleotides or nucleopeptides are described.

Chapter Six contains the experimental documentation of the thesis work.