15 results for Active testing
in CaltechTHESIS
Abstract:
Computer science and electrical engineering have been the great success story of the twentieth century. The neat modularity and mapping of a language onto circuits have led to robots on Mars, desktop computers and smartphones. But these devices are not yet able to do some of the things that life takes for granted: repair a scratch, reproduce, regenerate, or grow exponentially fast, all while remaining functional.
This thesis explores and develops algorithms, molecular implementations, and theoretical proofs in the context of “active self-assembly” of molecular systems. The long-term vision of active self-assembly is the theoretical and physical implementation of materials that are composed of reconfigurable units with the programmability and adaptability of biology’s numerous molecular machines. En route to this goal, we must first find a way to overcome the memory limitations of molecular systems, and to discover the limits of complexity that can be achieved with individual molecules.
One of the main thrusts in molecular programming is to use computer science as a tool for figuring out what can be achieved. While molecular systems that are Turing-complete have been demonstrated [Winfree, 1996], these systems still cannot achieve some of the feats biology has achieved.
One might think that because a system is Turing-complete, capable of computing “anything,” it can do any arbitrary task. But while it can simulate any digital computational problem, there are many behaviors that are not “computations” in a classical sense and cannot be directly implemented. Examples include exponential growth and molecular motion relative to a surface.
Passive self-assembly systems cannot implement these behaviors because (a) molecular motion relative to a surface requires a source of fuel that is external to the system, and (b) passive systems are too slow to assemble exponentially-fast-growing structures. We call these behaviors “energetically incomplete” programmable behaviors. This class of behaviors includes any behavior where a passive physical system simply does not have enough physical energy to perform the specified tasks in the requisite amount of time.
As we will demonstrate and prove, a sufficiently expressive implementation of an “active” molecular self-assembly approach can achieve these behaviors. Using an external source of fuel solves part of the problem, so the system is not “energetically incomplete.” But the programmable system also needs to have sufficient expressive power to achieve the specified behaviors. Perhaps surprisingly, some of these systems do not even require Turing completeness to be sufficiently expressive.
Building on a large variety of work by other scientists in the fields of DNA nanotechnology, chemistry and reconfigurable robotics, this thesis introduces several research contributions in the context of active self-assembly.
We show that simple primitives such as insertion and deletion can generate complex and interesting results, such as the growth of a linear polymer in logarithmic time and the ability of a linear polymer to treadmill. To this end we developed a formal model for active self-assembly that is directly implementable with DNA molecules. We show that this model is computationally equivalent to a machine producing languages strictly stronger than the regular languages and, at most, as strong as context-free grammars. This is a significant advance in the theory of active self-assembly, as prior models were either entirely theoretical or only implementable in the context of macro-scale robotics.
We developed a chain reaction method for the autonomous exponential growth of a linear DNA polymer. Our method is based on the insertion of molecules into the assembly, which generates two new insertion sites for every initial one employed. The building of a line in logarithmic time is a first step toward building a shape in logarithmic time. We demonstrate the first construction of a synthetic linear polymer that grows exponentially fast via insertion. We show that monomer molecules are converted into the polymer in logarithmic time via spectrofluorimetry and gel electrophoresis experiments. We also demonstrate the division of these polymers via the addition of a single DNA complex that competes with the insertion mechanism. This shows the growth of a population of polymers in logarithmic time. We characterize the DNA insertion mechanism that we utilize in Chapter 4. We experimentally demonstrate that we can control the kinetics of this reaction over at least seven orders of magnitude, by programming the sequences of DNA that initiate the reaction.
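To make the doubling argument concrete, the following minimal Python sketch (an illustration only, not the thesis's formal model or its DNA implementation) simulates a polymer in which every insertion event consumes one active site and exposes two new ones; the length roughly doubles with each synchronous round of insertions, so a target length n is reached in roughly log2(n) rounds.

```python
# Illustrative sketch (not the thesis's formal model): each insertion consumes
# one active site and exposes two new ones, so the number of sites, and hence
# the polymer length, roughly doubles per synchronous round of insertions.

def rounds_to_reach(target_length):
    """Count synchronous insertion rounds until the polymer reaches target_length."""
    length, sites, rounds = 2, 1, 0   # assume a starting dimer with one insertion site
    while length < target_length:
        length += sites               # every active site accepts one inserted monomer
        sites *= 2                    # each insertion exposes two new insertion sites
        rounds += 1
    return rounds

for n in (10, 1_000, 1_000_000):
    print(f"target length {n:>9,}: {rounds_to_reach(n)} rounds")
# The round count grows like log2(n), i.e., logarithmic-time growth of a linear polymer.
```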
In addition, we review co-authored work on programming molecular robots using prescriptive landscapes of DNA origami; this was the first microscopic demonstration of programming a molecular robot to walk on a 2-dimensional surface. We developed a snapshot method for imaging these random walking molecular robots and a CAPTCHA-like analysis method for difficult-to-interpret imaging data.
Abstract:
Biological machines are active devices that are composed of cells and other biological components. These functional devices are best suited for physiological environments that support cellular function and survival. Biological machines have the potential to revolutionize the engineering of biomedical devices intended for implantation, where the human body can provide the required physiological environment. For engineering such cell-based machines, bio-inspired design can serve as a guiding platform as it provides functionally proven designs that are attainable by living cells. In the present work, a systematic approach was used to tissue engineer one such machine by exclusively using biological building blocks and by employing a bio-inspired design. Valveless impedance pumps were constructed based on the working principles of the embryonic vertebrate heart and by using cells and tissue derived from rats. The function of these tissue-engineered muscular pumps was characterized by exploring their spatiotemporal and flow behavior in order to better understand the capabilities and limitations of cells when used as the engines of biological machines.
Abstract:
Recent observations of the temperature anisotropies of the cosmic microwave background (CMB) favor an inflationary paradigm in which the scale factor of the universe inflated by many orders of magnitude at some very early time. Such a scenario would produce the observed large-scale isotropy and homogeneity of the universe, as well as the scale-invariant perturbations responsible for the observed (10 parts per million) anisotropies in the CMB. An inflationary epoch is also theorized to produce a background of gravitational waves (or tensor perturbations), the effects of which can be observed in the polarization of the CMB. The E-mode (or parity even) polarization of the CMB, which is produced by scalar perturbations, has now been measured with high significance. In contrast, the B-mode (or parity odd) polarization, which is sourced by tensor perturbations, has yet to be observed. A detection of the B-mode polarization of the CMB would provide strong evidence for an inflationary epoch early in the universe’s history.
In this work, we explore experimental techniques and analysis methods used to probe the B-mode polarization of the CMB. These experimental techniques have been used to build the Bicep2 telescope, which was deployed to the South Pole in 2009. After three years of observations, Bicep2 has acquired one of the deepest observations of the degree-scale polarization of the CMB to date. Similarly, this work describes analysis methods developed for the Bicep1 three-year data analysis, which includes the full data set acquired by Bicep1. This analysis has produced the tightest constraint on the B-mode polarization of the CMB to date, corresponding to a tensor-to-scalar ratio estimate of r = 0.04±0.32, or a Bayesian 95% credible interval of r < 0.70. These analysis methods, in addition to producing this new constraint, are directly applicable to future analyses of Bicep2 data. Taken together, the experimental techniques and analysis methods described herein promise to open a new observational window into the inflationary epoch and the initial conditions of our universe.
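As a rough consistency check on the quoted numbers (a sketch only; the published Bicep1 analysis uses the full non-Gaussian likelihood), a Gaussian likelihood of width 0.32 centered at r = 0.04, combined with a flat prior restricted to r ≥ 0, gives a 95% upper credible limit of roughly r < 0.65, in the same ballpark as the published r < 0.70.

```python
# Rough sketch: 95% upper credible limit on r from a Gaussian likelihood
# truncated to r >= 0 (the published analysis uses the full non-Gaussian
# likelihood, so this is only a ballpark check of the quoted numbers).
from scipy.stats import norm

mu, sigma = 0.04, 0.32                          # estimate and width quoted in the abstract
mass_below_zero = norm.cdf((0.0 - mu) / sigma)  # prior support excludes r < 0
target = mass_below_zero + 0.95 * (1.0 - mass_below_zero)
upper = mu + sigma * norm.ppf(target)
print(f"95% upper credible limit: r < {upper:.2f}")  # ~0.65, cf. the quoted r < 0.70
```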
Abstract:
Therapy employing epidural electrostimulation holds great potential for improving the treatment of patients with spinal cord injury (SCI) (Harkema et al., 2011). Further promising results from combined therapies using electrostimulation have also been recently obtained (e.g., van den Brand et al., 2012). The devices being developed to deliver the stimulation are highly flexible, capable of delivering any individual stimulus among a combinatorially large set of stimuli (Gad et al., 2013). While this extreme flexibility is very useful for ensuring that the device can deliver an appropriate stimulus, the challenge of choosing good stimuli is quite substantial, even for expert human experimenters. To develop a fully implantable, autonomous device which can provide useful therapy, it is necessary to design an algorithmic method for choosing the stimulus parameters. Such a method could be used in a clinical setting by caregivers who are not experts in the neurostimulator's use, and would allow the system to adapt autonomously between visits to the clinic. To create such an algorithm, this dissertation pursues the general class of active learning algorithms that includes Gaussian Process Upper Confidence Bound (GP-UCB, Srinivas et al., 2010), developing the Gaussian Process Batch Upper Confidence Bound (GP-BUCB, Desautels et al., 2012) and Gaussian Process Adaptive Upper Confidence Bound (GP-AUCB) algorithms. This dissertation develops new theoretical bounds for the performance of these and similar algorithms, empirically assesses these algorithms against a number of competitors in simulation, and applies a variant of the GP-BUCB algorithm in closed-loop to control SCI therapy via epidural electrostimulation in four live rats. The algorithm was tasked with maximizing the amplitude of evoked potentials in the rats' left tibialis anterior muscle. These experiments show that the algorithm is capable of directing these experiments sensibly, finding effective stimuli in all four animals. Further, in direct competition with an expert human experimenter, the algorithm produced superior performance in terms of average reward and comparable or superior performance in terms of maximum reward. These results indicate that variants of GP-BUCB may be suitable for autonomously directing SCI therapy.
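For readers unfamiliar with the GP-UCB family, the sketch below illustrates the core ideas: select the stimulus maximizing the posterior mean plus a scaled posterior standard deviation, and, for batch selection in the spirit of GP-BUCB, "hallucinate" the posterior mean at pending points so that only the variance shrinks and the batch spreads out. It is written against scikit-learn with a hypothetical one-dimensional stimulus grid and a toy response function; it is not the dissertation's implementation.

```python
# Hedged sketch of GP-UCB-style batch selection in the spirit of GP-BUCB
# (not the dissertation's code): pending selections are "hallucinated" at
# their posterior means, which leaves the mean unchanged but shrinks the
# variance, pushing later picks in the batch away from earlier ones.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
candidates = np.linspace(0.0, 10.0, 200).reshape(-1, 1)  # hypothetical 1-D stimulus grid

def toy_response(x):                                     # stand-in for the evoked potential
    return np.sin(x).ravel() + 0.1 * rng.standard_normal(len(x))

X_obs = candidates[rng.choice(len(candidates), 3, replace=False)]
y_obs = toy_response(X_obs)

def select_batch(X_obs, y_obs, batch_size=3, beta=4.0):
    X_fit, y_fit, batch = X_obs.copy(), y_obs.copy(), []
    for _ in range(batch_size):
        gp = GaussianProcessRegressor(kernel=RBF(1.0) + WhiteKernel(0.01),
                                      normalize_y=True).fit(X_fit, y_fit)
        mu, sd = gp.predict(candidates, return_std=True)
        x_next = candidates[np.argmax(mu + np.sqrt(beta) * sd)]      # UCB rule
        batch.append(x_next)
        X_fit = np.vstack([X_fit, x_next])                           # hallucinated observation:
        y_fit = np.append(y_fit, gp.predict(x_next.reshape(1, -1)))  # posterior mean, no new info
    return np.array(batch)

print("next batch of stimuli:", select_batch(X_obs, y_obs).ravel())
```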
Abstract:
This thesis presents a civil engineering approach to active control for civil structures. The proposed control technique, termed Active Interaction Control (AIC), utilizes dynamic interactions between different structures, or components of the same structure, to reduce the resonance response of the controlled or primary structure under earthquake excitations. The primary control objective of AIC is to minimize the maximum story drift of the primary structure. This is accomplished by timing the controlled interactions so as to withdraw the maximum possible vibrational energy from the primary structure to an auxiliary structure, where the energy is stored and eventually dissipated as the external excitation decreases. One of the important advantages of AIC over most conventional active control approaches is the very low external power required.
In this thesis, the AIC concept is introduced and a new AIC algorithm, termed the Optimal Connection Strategy (OCS) algorithm, is proposed. The efficiency of the OCS algorithm is demonstrated and compared with two previously existing AIC algorithms, the Active Interface Damping (AID) and Active Variable Stiffness (AVS) algorithms, through idealized examples and numerical simulations of Single- and Multi-Degree-of-Freedom systems under earthquake excitations. It is found that the OCS algorithm is capable of significantly reducing the story drift response of the primary structure. The effects of the mass, damping, and stiffness of the auxiliary structure on the system performance are investigated in parametric studies. Practical issues such as the sampling interval and time delay are also examined. A simple but effective predictive time delay compensation scheme is developed.
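To illustrate only the underlying idea of withdrawing vibrational energy through a controlled interaction, the sketch below uses a generic resettable auxiliary stiffness with an energy-based switching rule; it is not the OCS, AID, or AVS algorithm, and all parameters are hypothetical.

```python
# Illustration only: a resettable auxiliary stiffness on a single-degree-of-
# freedom structure, switched so that it only ever extracts energy from the
# primary mass. This is a generic energy-withdrawal rule in the spirit of
# active interaction control, not the OCS algorithm developed in the thesis.
import numpy as np

m, c, k = 1.0, 0.02, 1.0           # hypothetical primary mass, damping, stiffness
k_aux = 0.5                        # hypothetical auxiliary (interaction) stiffness
dt, steps = 0.005, 40_000
t = np.arange(steps) * dt
ag = 0.2 * np.sin(t)               # near-resonant sinusoidal ground acceleration

def peak_drift(use_aux):
    x = v = x_ref = 0.0            # x_ref: unstretched position of the auxiliary spring
    peak = 0.0
    for a_g in ag:
        f_aux = -k_aux * (x - x_ref) if use_aux else 0.0
        a = (-c * v - k * x + f_aux) / m - a_g
        v += a * dt
        x += v * dt
        # Release (dump the stored spring energy into the auxiliary system) the
        # moment the spring would start returning energy to the primary mass.
        if use_aux and (x - x_ref) * v < 0.0:
            x_ref = x
        peak = max(peak, abs(x))
    return peak

print(f"peak drift, uncontrolled:                {peak_drift(False):.2f}")
print(f"peak drift, with resettable interaction: {peak_drift(True):.2f}")
```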
Abstract:
The epidemic of HIV/AIDS in the United States is constantly changing and evolving, growing from patient zero to an estimated 650,000 to 900,000 infected Americans today. The nature and course of HIV changed dramatically with the introduction of antiretrovirals. This discourse examines many different facets of HIV, from the beginning, when there was no treatment, to the present era of highly active antiretroviral therapy (HAART). By utilizing statistical analysis of clinical data, this paper examines where we were, where we are, and where treatment of HIV/AIDS is headed.
Chapter Two describes the datasets that were used for the analyses. The primary database, collected by the author from an outpatient HIV clinic, includes records from 1984 to the present. The second database is the public dataset of the Multicenter AIDS Cohort Study (MACS), which covers the period from 1984 to October 1992. Comparisons are made between the two datasets.
Chapter Three discusses where we were. Before the first anti-HIV drugs (called antiretrovirals) were approved, there was no treatment to slow the progression of HIV. The first generation of antiretrovirals, reverse transcriptase inhibitors such as AZT (zidovudine), DDI (didanosine), DDC (zalcitabine), and D4T (stavudine), provided the first treatment for HIV. The first clinical trials showed that these antiretrovirals had a significant impact on increasing patient survival. The trials also showed that patients on these drugs had increased CD4+ T cell counts. Chapter Three examines the distributions of CD4 T cell counts. The results show that the estimated distributions of CD4 T cell counts are distinctly non-Gaussian. Thus distributional assumptions regarding CD4 T cell counts must be taken into account when performing analyses with this marker. The results also show that the estimated CD4 T cell distributions for each disease stage (asymptomatic, symptomatic, and AIDS) are non-Gaussian. Interestingly, the distribution of CD4 T cell counts for the asymptomatic period is significantly below the CD4 T cell distribution for the uninfected population, suggesting that even in patients with no outward symptoms of HIV infection, there exist high levels of immunosuppression.
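The kind of distributional check described here can be illustrated in a few lines of Python on synthetic data (the clinic and MACS datasets are not reproduced; the log-normal sample below is purely hypothetical): a right-skewed, CD4-like sample strongly rejects normality on the raw scale, which is why Gaussian assumptions about this marker need care.

```python
# Illustration on synthetic data only (not the clinic or MACS data): a skewed,
# CD4-like sample fails a normality test on the raw scale, while a simple
# transform behaves far more like a Gaussian sample.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
cd4 = rng.lognormal(mean=np.log(450), sigma=0.45, size=500)  # hypothetical counts (cells/uL)

for label, sample in [("raw counts", cd4), ("log counts", np.log(cd4))]:
    stat, p = stats.normaltest(sample)                       # D'Agostino-Pearson test
    print(f"{label:10s}  skew = {stats.skew(sample):+.2f}   normality-test p = {p:.3g}")
```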
Chapter Four discusses where we are at present. HIV quickly grew resistant to reverse transcriptase inhibitors, which were given sequentially as mono or dual therapy. As resistance grew, the positive effects of the reverse transcriptase inhibitors on CD4 T cell counts and survival dissipated. As the old era faded, a new era characterized by a new class of drugs and new technology changed the way that we treat HIV-infected patients. Viral load assays were able to quantify the levels of HIV RNA in the blood. By quantifying the viral load, one now had a faster, more direct way to test antiretroviral regimen efficacy. Protease inhibitors, which attacked a different region of HIV than reverse transcriptase inhibitors, when used in combination with other antiretroviral agents were found to dramatically and significantly reduce the HIV RNA levels in the blood. Patients also experienced significant increases in CD4 T cell counts. For the first time in the epidemic, there was hope. It was hypothesized that with HAART, viral levels could be kept so low that the immune system, as measured by CD4 T cell counts, would be able to recover. If these viral levels could be kept low enough, it would be possible for the immune system to eradicate the virus. The hypothesis of immune reconstitution, that is, bringing CD4 T cell counts up to levels seen in uninfected patients, is tested in Chapter Four. It was found that for these patients, there was not enough of a CD4 T cell increase to be consistent with the hypothesis of immune reconstitution.
In Chapter Five, the effectiveness of long-term HAART is analyzed. Survival analysis was conducted on 213 patients on long-term HAART. The primary endpoint was presence of an AIDS defining illness. A high level of clinical failure, or progression to an endpoint, was found.
Chapter Six yields insights into where we are going. New technology such as viral genotypic testing, which looks at the genetic structure of HIV and determines where mutations have occurred, has shown that HIV is capable of producing resistance mutations that confer multiple drug resistance. This section looks at resistance issues and speculates, ceteris paribus, on where the state of HIV is going. This section first addresses viral genotype and the correlates of viral load and disease progression. A second analysis looks at patients who have failed their primary attempts at HAART and subsequent salvage therapy. It was found that salvage regimens, efforts to control viral replication through the administration of different combinations of antiretrovirals, were not effective in controlling viral replication in 90 percent of the population. Thus, primary attempts at therapy offer the best chance of viral suppression and delay of disease progression. Documentation of transmission of drug-resistant virus suggests that the public health crisis of HIV is far from over. Drug-resistant HIV can sustain the epidemic and hamper our efforts to treat HIV infection. The data presented suggest that the decrease in the morbidity and mortality due to HIV/AIDS is transient. Deaths due to HIV will increase, and public health officials must prepare for this eventuality unless new treatments become available. These results also underscore the importance of the vaccine effort.
The final chapter looks at the economic issues related to HIV. The direct and indirect costs of treating HIV/AIDS are very high. For the first time in the epidemic, there exists treatment that can actually slow disease progression. The direct costs for HAART are estimated. It is estimated that the direct lifetime cost of treating each HIV-infected patient with HAART is between $353,000 and $598,000, depending on how long HAART prolongs life. If one looks at the incremental cost per year of life saved, it is only $101,000. This is comparable with the incremental cost per year of life saved from coronary artery bypass surgery.
Policy makers need to be aware that although HAART can delay disease progression, it is not a cure and HIV is not over. The results presented here suggest that the decreases in the morbidity and mortality due to HIV are transient. Policymakers need to be prepared for the eventual increase in AIDS incidence and mortality. Costs associated with HIV/AIDS are also projected to increase. The cost savings seen recently have been from the dramatic decreases in the incidence of AIDS defining opportunistic infections. As patients who have been on HAART the longest start to progress to AIDS, policymakers and insurance companies will find that the cost of treating HIV/AIDS will increase.
Abstract:
The 1,3-dipolar cycloadditions of trimethylsilyl diazomethane with camphorsultam-derived acrylates are reported as a means for the efficient synthesis of optically active pyrazolines. Trimethylsilyl diazomethane is a safe, commercially available diazoalkane which provides Δ1-pyrazolines in good yield and diastereoselectivity when camphorsultam-derived acrylates are used as the reaction dipolarophiles. These initial cycloadducts are subsequently converted to stable, characterizable Δ2-pyrazolines upon desilylation.
A manifold of reactions that can be applied to these Δ2-pyrazolines has been developed which includes pyrazoline reduction, N-N bond reduction, addition to the pyrazoline C=N by mild carbon nucleophiles, and both solvolytic and reductive chiral auxiliary removal. Additionally, it has been demonstrated that the pyrazoline reduction products can take part in peptide coupling reactions that allow for the pyrazolidines to serve as proline-like molecules. The development of this methodology is a general solution to the problem of highly substituted, functionalized pyrazoline synthesis. Importantly, the pyrazolines thus provided have been demonstrated to be amenable to reactions that add to their value as synthetic intermediates.
Abstract:
This dissertation describes efforts to model biological active sites with small molecule clusters. The approach used took advantage of a multinucleating ligand to control the structure and nuclearity of the product complexes, allowing the study of many different homo- and heterometallic clusters. Chapter 2 describes the synthesis of the multinucleating hexapyridyl trialkoxy ligand used throughout this thesis and the synthesis of trinuclear first row transition metal complexes supported by this framework, with an emphasis on tricopper systems as models of biological multicopper oxidases. The magnetic susceptibilities of these complexes were studied, and a linear relation was found between the Cu-O(alkoxide)-Cu angles and the antiferromagnetic coupling between copper centers. The triiron(II) and trizinc(II) complexes of the ligand were also isolated and structurally characterized.
Chapter 3 describes the synthesis of a series of heterometallic tetranuclear manganese dioxido complexes with various incorporated apical redox-inactive metal cations (M = Na+, Ca2+, Sr2+, Zn2+, Y3+). Chapter 4 presents the synthesis of heterometallic trimanganese(IV) tetraoxido complexes structurally related to the CaMn3 subsite of the oxygen-evolving complex (OEC) of Photosystem II. The reduction potentials of these complexes were studied, and it was found that each isostructural series displays a linear correlation between the reduction potentials and the Lewis acidities of the incorporated redox-inactive metals. The slopes of the plotted lines for both the dioxido and tetraoxido clusters are the same, suggesting a more general relationship between the electrochemical potentials of heterometallic manganese oxido clusters and their “spectator” cations. Additionally, these studies suggest that Ca2+ plays a role in modulating the redox potential of the OEC for water oxidation.
Chapter 5 presents studies of the effects of the redox-inactive metals on the reactivities of the heterometallic manganese complexes discussed in Chapters 3 and 4. Oxygen atom transfer from the clusters to phosphines is studied; although the reactivity is kinetically controlled in the tetraoxido clusters, the dioxido clusters with more Lewis acidic metal ions (Y3+ vs. Ca2+) appear to be more reactive. Investigations of hydrogen atom transfer and electron transfer rates are also discussed.
Appendix A describes the synthesis and metallation reactions of a new dinucleating bis(N-heterocyclic carbene) ligand framework. Dicopper(I) and dicobalt(II) complexes of this ligand were prepared and structurally characterized. A dinickel(I) dichloride complex was synthesized, reduced, and found to activate carbon dioxide. Appendix B describes preliminary efforts to desymmetrize the manganese oxido clusters via functionalization of the basal multinucleating ligand used in the preceding sections of this dissertation. Finally, Appendix C presents some partially characterized side products and unexpected structures that were isolated throughout the course of these studies.
Abstract:
Motivated by recent Mars Science Laboratory (MSL) results in which the ablation rate of the PICA heatshield was over-predicted, and staying true to the objectives outlined in the NASA Space Technology Roadmaps and Priorities report, this work focuses on advancing entry, descent, and landing (EDL) technologies for future space missions.
Due to the difficulties in performing flight tests in the hypervelocity regime, a new ground testing facility called the vertical expansion tunnel (VET) is proposed. The adverse effects from secondary diaphragm rupture in an expansion tunnel may be reduced or eliminated by orienting the tunnel vertically, matching the test gas pressure and the accelerator gas pressure, and initially separating the test gas from the accelerator gas by density stratification. If some sacrifice of the reservoir conditions can be made, the VET can be utilized in hypervelocity ground testing without the problems associated with secondary diaphragm rupture.
The performance of different constraints for the Rate-Controlled Constrained-Equilibrium (RCCE) method is investigated in the context of modeling reacting flows characteristic of ground testing facilities and re-entry conditions. The effectiveness of different constraints is isolated, and new constraints previously unmentioned in the literature are introduced. Three main benefits of the RCCE method were determined: 1) the reduction in the number of equations that need to be solved to model a reacting flow; 2) the reduction in stiffness of the system of equations to be solved; and 3) the ability to tabulate chemical properties as a function of a constraint once, prior to running a simulation, along with the ability to use the same table for multiple simulations.
Finally, published physical properties of PICA are compiled, and the composition of the pyrolysis gases that form at high temperatures inside a heatshield is investigated. A necessary link between the composition of the solid resin and the composition of the pyrolysis gases created is provided. This link, combined with a detailed investigation into a reacting pyrolysis gas mixture, allows a much-needed, consistent, and thorough description of many of the physical phenomena occurring in a PICA heatshield, and their implications, to be presented.
Through the use of computational fluid mechanics and computational chemistry methods, significant contributions have been made to advancing ground testing facilities, computational methods for reacting flows, and ablation modeling.
Abstract:
The first part of this work describes the uses of aperiodic structures in optics and integrated optics. In particular, devices are designed, fabricated, tested and analyzed which make use of a chirped grating corrugation on the surface of a dielectric waveguide. These structures can be used as input-output couplers, multiplexers and demultiplexers, and broad band filters.
Next, a theoretical analysis is made of the effects of a random statistical variation in the thicknesses of layers in a dielectric mirror on its reflectivity properties. Unlike the intentional aperiodicity introduced in the chirped gratings, the aperiodicity in the Bragg reflector mirrors is unintentional and is present to some extent in all devices made. The analysis involved in studying these problems relies heavily on the coupled mode formalism. The results are compared with computer experiments, as well as tests of actual mirrors.
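The effect of random layer-thickness errors on a Bragg mirror's reflectivity can also be illustrated with a short transfer-matrix Monte Carlo, a standard textbook alternative to the coupled-mode treatment used in the thesis; the indices, design wavelength, and error level below are assumed for illustration.

```python
# Hedged sketch: transfer-matrix Monte Carlo for a quarter-wave Bragg mirror
# with Gaussian thickness errors (assumed indices and design wavelength; the
# thesis analyzes this problem with the coupled-mode formalism instead).
import numpy as np

rng = np.random.default_rng(42)
lam0 = 1.0e-6                      # design wavelength (m), assumed
n_h, n_l = 3.5, 3.0                # assumed high/low layer indices
n_in, n_sub = 1.0, 3.5             # incident medium and substrate indices
layers = [n_h, n_l] * 15           # 15 quarter-wave pairs, listed from the incident side

def reflectance(rms_error=0.0):
    """Normal-incidence reflectance at lam0 for one random realization."""
    B, C = 1.0 + 0j, n_sub + 0j    # start from the substrate and work outward
    for n in reversed(layers):
        d = (lam0 / (4 * n)) * (1.0 + rms_error * rng.standard_normal())
        delta = 2 * np.pi * n * d / lam0
        M = np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                      [1j * n * np.sin(delta), np.cos(delta)]])
        B, C = M @ np.array([B, C])
    r = (n_in * B - C) / (n_in * B + C)
    return abs(r) ** 2

mean_noisy = np.mean([reflectance(0.05) for _ in range(200)])  # 5% rms thickness error
print(f"ideal stack R = {reflectance(0.0):.4f},  mean R with 5% errors = {mean_noisy:.4f}")
```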
The second part of this work describes a novel method for confining light in the transverse direction in an injection laser. These so-called transverse Bragg reflector lasers confine light normal to the junction plane in the active region, through reflection from an adjacent layered medium. Thus, in principle, it is possible to guide light in a dielectric layer whose index is lower than that of the surrounding material. The design, theory and testing of these diode lasers are discussed.
Abstract:
[no abstract]
Abstract:
This thesis presents a novel active mirror technology based on carbon fiber reinforced polymer (CFRP) composites and replication manufacturing processes. Several additional layers are incorporated into the structure to provide the reflective layer, actuation capabilities, and electrode routing. The mirror is thin, lightweight, and has large actuation capabilities. These features, along with the associated manufacturing processes, represent a significant change in design compared to traditional optics. Structural redundancy in the form of added material or support structures is replaced by thin, unsupported lightweight substrates with large actuation capabilities.
Several studies motivated by the desire to improve as-manufactured figure quality are performed. Firstly, imperfections in thin CFRP laminates and their effect on post-cure shape errors are studied. Numerical models are developed and compared to experimental measurements on flat laminates. Techniques to mitigate figure errors for thicker laminates are also identified. A method of properly integrating the reflective facesheet onto the front surface of the CFRP substrate is also presented. Finally, the effect of bonding multiple initially flat active plates to the backside of a curved CFRP substrate is studied. Figure deformations along with local surface defects are predicted and characterized experimentally. By understanding the mechanics behind these processes, significant improvements to the overall figure quality have been made.
Studies related to the actuation response of the mirror are also performed. The active properties of two materials are characterized and compared. Optimal active layer thicknesses for thin surface-parallel schemes are determined. Finite element simulations are used to make predictions on shape correction capabilities, demonstrating high correctability and stroke over low-order modes. The effect of actuator saturation is studied and shown to significantly degrade shape correction performance.
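A minimal sketch of this kind of shape-correction calculation is given below: actuator commands are obtained by least-squares fitting of influence functions to the negative of the measured figure error, and saturation is modeled by clipping the commands. The influence matrix here is random and merely stands in for the finite element model; the actuator count and stroke limit are assumptions.

```python
# Generic sketch of least-squares figure correction with actuator saturation
# (a random influence matrix stands in for the thesis's finite element model;
# sizes and the stroke limit are assumed, not measured values).
import numpy as np

rng = np.random.default_rng(7)
n_nodes, n_actuators = 400, 41
A = rng.standard_normal((n_nodes, n_actuators))       # influence functions (surface per unit command)
figure_error = A @ rng.standard_normal(n_actuators) + 0.05 * rng.standard_normal(n_nodes)

def rms(x):
    return np.sqrt(np.mean(x ** 2))

def corrected(error, limit=np.inf):
    cmd, *_ = np.linalg.lstsq(A, -error, rcond=None)  # least-squares actuator commands
    cmd = np.clip(cmd, -limit, limit)                 # model actuator saturation
    return error + A @ cmd

print(f"initial RMS figure error : {rms(figure_error):.3f}")
print(f"after correction         : {rms(corrected(figure_error)):.3f}")
print(f"with saturated actuators : {rms(corrected(figure_error, limit=0.5)):.3f}")
# Clipping the commands leaves part of the fitted error uncorrected, degrading
# the achievable figure, consistent with the saturation effect noted above.
```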
The initial figure as well as actuation capabilities of a fully-integrated active mirror prototype are characterized experimentally using a Projected Hartmann test. A description of the test apparatus is presented along with two verification measurements. The apparatus is shown to accurately capture both high-amplitude low spatial-frequency figure errors as well as those at lower amplitudes but higher spatial frequencies. A closed-loop figure correction is performed, reducing figure errors by 94%.
Abstract:
The material included within this report is the result of a series of tests of concrete specimens taken during the construction of various buildings in the cities of Pasadena and Los Angeles over a period of eight months.
The object of this study is to determine the effect of the water ratio on the ultimate strength of the concrete, as obtained from data observed and recorded from specimens taken from actual building practice rather than from laboratory specimens made under ideal, or at least more nearly standard, conditions.
Abstract:
Several patients of P. J. Vogel who had undergone cerebral commissurotomy for the control of intractable epilepsy were tested on a variety of tasks to measure aspects of cerebral organization concerned with lateralization in hemispheric function. From tests involving identification of shapes it was inferred that in the absence of the neocortical commissures, the left hemisphere still has access to certain types of information from the ipsilateral field. The major hemisphere can still make crude differentiations between various left-field stimuli, but is unable to specify exact stimulus properties. Most of the time the major hemisphere, having access to some ipsilateral stimuli, dominated the minor hemisphere in control of the body.
Competition for control of the body between the hemispheres is seen most clearly in tests of minor hemisphere language competency, in which it was determined that though the minor hemisphere does possess some minimal ability to express language, the major hemisphere prevented its expression much of the time. The right hemisphere was superior to the left in tests of perceptual visualization, and the two hemispheres appeared to use different strategies in attempting to solve the problems, namely, analysis for the left hemisphere and synthesis for the right hemisphere.
Analysis of the patients' verbal and performance I.Q.'s, as well as observations made throughout testing, suggests that the corpus callosum plays a critical role in activities that involve functions in which the minor hemisphere normally excels, and that the motor expression of these functions may normally come through the major hemisphere by way of the corpus callosum.
Lateral specialization is thought to be an evolutionary adaptation which overcame problems of a functional antagonism between the abilities normally associated with the two hemispheres. The tests of perception suggested that this function lateralized into the mute hemisphere because of an active counteraction by language. This latter idea was confirmed by the finding that left-handers, in whom there are likely to be bilateral language centers, are greatly deficient on tests of perception.
Abstract:
This dissertation describes studies on two multinucleating ligand architectures: the first scaffold was designed to support tricopper complexes, while the second platform was developed to support tri- and tetrametallic clusters.
In Chapter 2, the synthesis of yttrium (and lanthanide) complexes supported by a tripodal ligand framework designed to bind three copper centers in close proximity is described. Tricopper complexes were shown to react with dioxygen in a 1:1 [Cu3]/O2 stoichiometry to form intermediates in which the O–O bond was fully cleaved, as characterized via UV-Vis spectroscopy and determination of the reaction stoichiometry. Pre-arrangement of the three Cu centers was pivotal to cooperative O2 activation, as mono-copper complexes reacted differently with dioxygen. The reactivity of the observed intermediates was studied with various substrates (reductants, O-atom acceptors, H-atom donors, Brønsted acids) to determine their properties. In Chapter 3, the reactivity of the same yttrium-tricopper complex with nitric oxide was explored. Reductive coupling to form a trans-hyponitrite complex (characterized by X-ray crystallography) was observed via cooperative reactivity by an yttrium and a copper center on two distinct tetrametallic units. The hyponitrite complex was observed to release nitrous oxide upon treatment with a Brønsted acid, supporting its viability as an intermediate in nitric oxide reduction to nitrous oxide.
In Chapter 4, a different multinucleating ligand scaffold was employed to synthesize heterometallic triiron clusters containing one oxide and one hydroxide bridge. The effects of the redox-inactive, Lewis acidic heterometals on redox potential were studied by cyclic voltammetry, unveiling a linear correlation between redox potential and heterometal Lewis acidity. Further studies on these complexes showed that the Lewis acidity of the redox-inactive metals also affected the oxygen-atom transfer reactivity of these clusters. Comparisons of this reactivity with manganese systems, collaborative efforts to reassign the structures of related manganese oxo-hydroxo clusters, and synthetic attempts to access related dioxo clusters are also described.
In Appendix A, ongoing efforts to synthesize new clusters supported by the same multinucleating ligand platform are described. Studies of novel approaches towards ligand exchange in tetrametallic clusters and incorporation of new supporting and bridging ligand motifs in trinuclear complexes are presented.