11 results for ACCELERATED ATHEROSCLEROSIS
in CaltechTHESIS
Abstract:
The alkali metal salts of 1,5-hexadien-3-ols undergo accelerated Cope rearrangements to the enolates of δ,ε-unsaturated carbonyl compounds. The generality of the rearrangement was investigated in numerous systems, particularly acyclic cases, and the effects of changes in substituents, counterions, solvents, and geometrical structures were noted and discussed. Applications of this methodology in synthesis included the synthesis of the insect pheromone frontalin, the preparation of selectively monoprotected 1,6-dicarbonyl compounds from 4-methoxy- and 4-phenylthio-1,5-hexadien-3-ols, and the construction of complex ring structures such as a D-homo-estratetraenone derivative.
Thermochemical estimates of the energetics of anion-promoted alkoxide fragmentations were made, and in all cases heterolytic cleavage was favored over homolytic cleavage by 8.5-53 kcal/mol. The implication of these and other thermochemical estimates is that the anionic oxy-Cope rearrangement occurs via a concerted mechanism rather than a dissociation-recombination process. The concepts of anion-induced bond weakening were successfully applied to an accelerated [1,3]-shift of a dithiane fragment in a cyclohexenyl system. Trapping experiments demonstrated that > 85% of the [1,3]-shift occurred within a solvent cage. Attempts at promoting an intramolecular ene reaction using the potassium salts of 2,7-octadien-1-ol and 2,8-nonadien-1-ol were unsuccessful. A general review of anion-promoted bond reorganizations and anion substituent effects is also presented.
Abstract:
The connections between convexity and submodularity are explored, for purposes of minimizing and learning submodular set functions.
First, we develop a novel method for minimizing a particular class of submodular functions, which can be expressed as a sum of concave functions composed with modular functions. The basic algorithm uses an accelerated first-order method applied to a smoothed version of its convex extension. The smoothing algorithm is particularly novel as it allows us to treat general concave potentials without needing to construct a piecewise linear approximation as with graph-based techniques.
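For illustration, here is a minimal sketch of an accelerated (Nesterov/FISTA-style) first-order iteration of the kind described above, applied to a generic smoothed nonsmooth objective; the gradient oracle, smoothing parameter, and toy objective are illustrative stand-ins, not the thesis's actual smoothed convex extension.

```python
import numpy as np

def accelerated_gradient(grad, x0, lipschitz, steps=500):
    """Nesterov-accelerated gradient descent for a smooth convex objective.
    grad: gradient oracle; lipschitz: Lipschitz constant of the gradient."""
    x = y = np.asarray(x0, dtype=float)
    t = 1.0
    for _ in range(steps):
        x_next = y - grad(y) / lipschitz                   # gradient step from extrapolated point
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0  # momentum schedule
        y = x_next + ((t - 1.0) / t_next) * (x_next - x)   # extrapolation
        x, t = x_next, t_next
    return x

# Toy usage: minimize a Huber-type smoothing of sum_i |z_i| (a stand-in for a smoothed
# concave-of-modular potential); its gradient is z/mu clipped to [-1, 1], with L = 1/mu.
mu = 0.1
grad = lambda z: np.clip(z / mu, -1.0, 1.0)
print(accelerated_gradient(grad, np.array([3.0, -2.0]), lipschitz=1.0 / mu))
```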
Second, we derive the general conditions under which it is possible to find a minimizer of a submodular function via a convex problem. This provides a framework for developing submodular minimization algorithms. The framework is then used to develop several algorithms that can be run in a distributed fashion. This is particularly useful for applications where the submodular objective function consists of a sum of many terms, each term dependent on a small part of a large data set.
Lastly, we approach the problem of learning set functions from an unorthodox perspective---sparse reconstruction. We demonstrate an explicit connection between the problem of learning set functions from random evaluations and that of recovering sparse signals. Based on the observation that the Fourier transform for set functions satisfies exactly the conditions needed for sparse reconstruction algorithms to work, we examine some different function classes under which uniform reconstruction is possible.
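As a toy illustration of the object being learned (the basis convention here is the standard parity/Walsh-Hadamard one and may differ from the thesis's), the Fourier coefficients of a small set function can be computed by brute force; cut-like functions, for example, turn out to be very sparse in this basis.

```python
import itertools

def fourier_coefficients(f, n):
    """Parity-basis Fourier coefficients of a set function f on {0, ..., n-1}.
    Coefficient of B is (1/2^n) * sum_S f(S) * (-1)^{|S & B|}.  Exponential in n,
    so only meant for tiny examples."""
    subsets = [frozenset(c) for r in range(n + 1)
               for c in itertools.combinations(range(n), r)]
    return {B: sum(f(S) * (-1) ** len(S & B) for S in subsets) / 2 ** n
            for B in subsets}

# Cut function of a 2-node graph with one edge: f(S) = 1 iff exactly one endpoint is in S.
cut = lambda S: len(S) * (2 - len(S))
print(fourier_coefficients(cut, 2))   # only two of the four coefficients are nonzero
```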
Abstract:
Galaxy clusters are the largest gravitationally bound objects in the observable universe, and they are formed from the largest perturbations of the primordial matter power spectrum. During initial cluster collapse, matter is accelerated to supersonic velocities, and the baryonic component is heated as it passes through accretion shocks. This process stabilizes when the pressure of the bound matter prevents further gravitational collapse. Galaxy clusters are useful cosmological probes, because their formation progressively freezes out at the epoch when dark energy begins to dominate the expansion and energy density of the universe. A diverse set of observables, from radio through X-ray wavelengths, are sourced from galaxy clusters, and this is useful for self-calibration. The distributions of these observables trace a cluster's dark matter halo, which represents more than 80% of the cluster's gravitational potential. One such observable is the Sunyaev-Zel'dovich effect (SZE), which results when the ionized intracluster medium blueshifts the cosmic microwave background via Compton scattering. Great technical advances in the last several decades have made regular observation of the SZE possible. Resolved SZE science, such as is explored in this analysis, has benefitted from the construction of large-format camera arrays consisting of highly sensitive millimeter-wave detectors, such as Bolocam. Bolocam is a submillimeter camera, sensitive to 140 GHz and 268 GHz radiation, located at one of the best observing sites in the world: the Caltech Submillimeter Observatory on Mauna Kea in Hawaii. Bolocam fielded 144 of the original spider-web NTD bolometers used in an entire generation of ground-based, balloon-borne, and satellite-borne millimeter-wave instrumentation. Over approximately six years, our group at Caltech has developed a mature galaxy cluster observational program with Bolocam. This thesis describes the construction of the instrument's full cluster catalog: BOXSZ. Using this catalog, I have scaled the Bolocam SZE measurements with X-ray mass approximations in an effort to characterize the SZE signal as a viable mass probe for cosmology. This work has confirmed the SZE to be a low-scatter tracer of cluster mass. The analysis has also revealed how sensitive the SZE-mass scaling is to small biases in the adopted mass approximation. Future Bolocam analysis efforts are set on resolving these discrepancies by approximating cluster mass jointly with different observational probes.
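For background (these are the standard thermal-SZE relations, not results from this thesis), the SZE amplitude is governed by the Compton-y parameter, the line-of-sight integral of the electron pressure:

\[ y = \frac{\sigma_T}{m_e c^2} \int P_e \, dl, \qquad \frac{\Delta T_{\mathrm{SZE}}}{T_{\mathrm{CMB}}} = f(x)\, y, \qquad x = \frac{h\nu}{k_B T_{\mathrm{CMB}}}, \]

with \( f(x) = x\coth(x/2) - 4 \) in the non-relativistic limit. Because y integrates the thermal pressure of the intracluster gas, the integrated SZE signal is expected to scale tightly with total cluster mass, which motivates the SZE-mass scaling analysis described above.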
Abstract:
Degeneration of the outer retina usually causes blindness by affecting the photoreceptor cells. However, the ganglion cells, whose axons form the optic nerve, in the middle and inner retinal layers are often left intact. Retinal implants, which can partially restore vision by electrical stimulation, have therefore become a focus of research. Although many groups worldwide have devoted substantial effort to building retinal implant devices, current state-of-the-art technologies still lack a reliable packaging scheme for devices with the desired high-density, multi-channel features. Wireless, flexible retinal implants have long been the ultimate goal of retinal prosthetics. In this dissertation, a reliable packaging scheme for wireless, flexible, parylene-based retinal implants is developed. It not only provides stable electrical and mechanical connections to high-density, multi-channel (1000+ channels on a 5 mm × 5 mm chip area) IC chips, but can also survive for more than 10 years in the corrosive fluid environment of the human body.
The device is based on a parylene-metal-parylene sandwich structure, in which the adhesion between the parylene layers and the metals embedded within them has been studied. Integration technology for high-density, multi-channel IC chips has also been addressed and tested with dummy and real 268-channel and 1024-channel retinal IC chips. In addition, different protection schemes have been applied to IC chips and discrete components to maximize device lifetime. Their effectiveness has been confirmed by accelerated and active lifetime soaking tests in saline solution. Surgical mockups have also been designed and successfully implanted in dog and pig eyes. Additionally, the electrodes used to stimulate the ganglion cells have been modified to lower the interface impedance and shaped to better fit the retina. Finally, all the developed technologies have been applied to the final device with a dual-metal-layer structure.
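As a rough illustration of how accelerated soak tests are commonly extrapolated to body-temperature lifetimes, here is an Arrhenius-model sketch; the activation energy and soak temperature below are assumed values for illustration, not numbers taken from this thesis.

```python
import math

K_B = 8.617e-5  # Boltzmann constant in eV/K

def arrhenius_acceleration(e_a_ev, t_test_c, t_use_c):
    """Acceleration factor of a soak test at t_test_c relative to use at t_use_c (deg C)."""
    t_test = t_test_c + 273.15
    t_use = t_use_c + 273.15
    return math.exp((e_a_ev / K_B) * (1.0 / t_use - 1.0 / t_test))

# Hypothetical example: 90 C saline soak vs. 37 C body temperature, assumed E_a = 0.8 eV.
af = arrhenius_acceleration(0.8, 90.0, 37.0)
print(f"acceleration factor ~{af:.0f}: 60 days at 90 C ~ {60 * af / 365:.0f} years at 37 C")
```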
Abstract:
Evidence for the stereochemical isomerization of a variety of ansa metallocene compounds is presented. For the scandocene allyl derivatives described here, we have established that the process is promoted by a variety of salts in both ether and hydrocarbon solvents and is not accelerated by light. A plausible mechanism based on an earlier proposal by Marks et al. is offered as an explanation of this process. It involves coordination of anions and/or donor solvents to the metal center with cation assistance to encourage metal-cyclopentadienyl bond heterolysis, rotation about the Si-Cp bond of the detached cyclopentadienide, and recoordination of the opposite face. Our observations in some cases of thermodynamic racemic:meso ratios under the reaction conditions commonly used for the synthesis of the metallocene chlorides suggest that the interchange is faster than metallation, such that the composition of the reaction mixture is determined by thermodynamic, not kinetic, control in these cases.
Two new ansa-scandocene alkenyl compounds react with olefins resulting in the formation of η3-allyl complexes. Kinetics and labeling experiments indicate a tuck-in intermediate on the reaction pathway; in this intermediate the metal is bound to the carbon adjacent to the silyl linker in the rear of the metallocene wedge. In contrast, reaction of permethylscandocene alkenyl compounds with olefins results, almost exclusively, in vinylic C-H bond activation. It is proposed that relieving transition state steric interactions between the cyclopentadienyl rings and the olefin by either linking the rings together or using a larger lanthanide metal may allow for olefin coordination, stabilizing the transition state for allylic σ-bond metathesis.
A selectively isotopically labeled propylene, CH2CD(13CH3), was synthesized and its polymerization was carried out at low concentration in toluene solution using isospecific metallocene catalysts. Analysis of the NMR spectra (13C, 1H, and 2H) of the resultant polymers revealed that the production of stereoerrors through chain epimerization proceeds exclusively by the tertiary-alkyl mechanism. Additionally, enantiofacial inversion of the terminally unsaturated polymer chain occurs by a non-dissociative process. The implications of these results for the mechanism of olefin polymerization with these catalysts are discussed.
Abstract:
In the quest for a descriptive theory of decision-making, the rational actor model in economics imposes rather unrealistic expectations and abilities on human decision makers. The further we move from idealized scenarios, such as perfectly competitive markets, and ambitiously extend the reach of the theory to describe everyday decision-making situations, the less sense these assumptions make. Behavioural economics has instead proposed models based on assumptions that are more psychologically realistic, with the aim of gaining more precision and descriptive power. Increased psychological realism, however, comes at the cost of a greater number of parameters and model complexity. There is now a plethora of models, based on different assumptions and applicable in differing contextual settings, and selecting the right model to use tends to be an ad-hoc process. In this thesis, we develop optimal experimental design methods and evaluate different behavioural theories against evidence from lab and field experiments.
We look at evidence from controlled laboratory experiments. Subjects are presented with choices between monetary gambles or lotteries. Different decision-making theories evaluate the choices differently and would make distinct predictions about the subjects' choices. Theories whose predictions are inconsistent with the actual choices can be systematically eliminated. Behavioural theories can have multiple parameters, requiring complex experimental designs with a very large number of possible choice tests. This imposes computational and economic constraints on using classical experimental design methods. We develop a methodology of adaptive tests: Bayesian Rapid Optimal Adaptive Designs (BROAD) that sequentially chooses the "most informative" test at each stage and, based on the response, updates its posterior beliefs over the theories, which informs the next most informative test to run. BROAD utilizes the Equivalence Class Edge Cutting (EC2) criterion to select tests. We prove that the EC2 criterion is adaptively submodular, which allows us to prove theoretical guarantees against the Bayes-optimal testing sequence even in the presence of noisy responses. In simulated ground-truth experiments, we find that the EC2 criterion recovers the true hypotheses with significantly fewer tests than more widely used criteria such as Information Gain and Generalized Binary Search. We show, theoretically as well as experimentally, that, surprisingly, these popular criteria can perform poorly in the presence of noise, or subject errors. Furthermore, we use the adaptive submodular property of EC2 to implement an accelerated greedy version of BROAD, which leads to orders of magnitude speedup over other methods.
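A minimal, noiseless sketch of an EC2-style greedy rule (the data layout and names are illustrative; the thesis's implementation handles noisy responses and is more involved): hypotheses are grouped into equivalence classes (the competing theories), edges between hypotheses in different classes are weighted by products of prior probabilities, and the next test is the one expected to cut the most edge weight.

```python
import itertools

def ec2_gain(prior, classes, predictions, test):
    """Expected weight of between-class edges cut by running `test`.
    prior[h]: prior probability of hypothesis h
    classes[h]: equivalence class of h (e.g. which theory it instantiates)
    predictions[h][test]: the outcome hypothesis h predicts for the test (noiseless)."""
    def cross_weight(hyps):
        return sum(prior[a] * prior[b]
                   for a, b in itertools.combinations(hyps, 2)
                   if classes[a] != classes[b])
    total = cross_weight(list(prior))
    gain = 0.0
    for y in {predictions[h][test] for h in prior}:
        consistent = [h for h in prior if predictions[h][test] == y]
        p_y = sum(prior[h] for h in consistent)
        gain += p_y * (total - cross_weight(consistent))  # edges cut if outcome y is observed
    return gain

def choose_next_test(prior, classes, predictions, tests):
    """Greedy rule: pick the test with the largest expected EC2 gain."""
    return max(tests, key=lambda t: ec2_gain(prior, classes, predictions, t))
```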
We use BROAD to perform two experiments. First, we compare the main classes of theories for decision-making under risk, namely: expected value, prospect theory, constant relative risk aversion (CRRA) and moments models. Subjects are given an initial endowment, and sequentially presented choices between two lotteries, with the possibility of losses. The lotteries are selected using BROAD, and 57 subjects from Caltech and UCLA are incentivized by randomly realizing one of the lotteries chosen. Aggregate posterior probabilities over the theories show limited evidence in favour of CRRA and moments models. Classifying the subjects into types showed that most subjects are described by prospect theory, followed by expected value. Adaptive experimental design raises the possibility that subjects could engage in strategic manipulation, i.e. subjects could mask their true preferences and choose differently in order to obtain more favourable tests in later rounds, thereby increasing their payoffs. We pay close attention to this problem; strategic manipulation is ruled out since it is infeasible in practice, and also since we do not find any signatures of it in our data.
In the second experiment, we compare the main theories of time preference: exponential discounting, hyperbolic discounting, "present bias" models: quasi-hyperbolic (α, β) discounting and fixed cost discounting, and generalized-hyperbolic discounting. 40 subjects from UCLA were given choices between 2 options: a smaller but more immediate payoff versus a larger but later payoff. We found very limited evidence for present bias models and hyperbolic discounting, and most subjects were classified as generalized hyperbolic discounting types, followed by exponential discounting.
In these models the passage of time is linear. We instead consider a psychological model where the perception of time is subjective. We prove that when the biological (subjective) time is positively dependent, it gives rise to hyperbolic discounting and temporal choice inconsistency.
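One standard way to see how an uncertain subjective clock produces hyperbolic discounting (a textbook construction, not necessarily the proof given in the thesis): suppose rewards are discounted exponentially in subjective time, with the subjective rate r uncertain and Gamma-distributed with shape k and scale θ. Then

\[ D(t) = \mathbb{E}\left[e^{-r t}\right] = (1 + \theta t)^{-k}, \]

a generalized-hyperbolic discount function (exactly hyperbolic for k = 1). The implied instantaneous discount rate \( -D'(t)/D(t) = k\theta/(1+\theta t) \) declines with delay, which is the source of temporal choice inconsistency.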
We also test the predictions of behavioural theories in the "wild". We pay attention to prospect theory, which emerged as the dominant theory in our lab experiments of risky choice. Loss aversion and reference dependence predict that consumers will behave in ways distinct from what the standard rational model predicts. Specifically, loss aversion predicts that when an item is offered at a discount, the demand for it will be greater than that explained by its price elasticity. Even more importantly, when the item is no longer discounted, demand for its close substitute will increase excessively. We tested this prediction using a discrete choice model with a loss-averse utility function on data from a large eCommerce retailer. Not only did we identify loss aversion, but we also found that the effect decreased with consumers' experience. We outline the policy implications that consumer loss aversion entails, and strategies for competitive pricing.
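A minimal sketch of the kind of discrete choice model this describes, a multinomial logit with a reference-dependent, loss-averse price term; the parameter names and functional form are illustrative assumptions, not the thesis's estimated specification.

```python
import numpy as np

def loss_averse_utility(price, ref_price, alpha, eta, lam):
    """Reference-dependent utility of buying at `price` given reference price `ref_price`.
    alpha: baseline price sensitivity; eta: gain-loss weight; lam > 1: loss aversion."""
    gain = max(ref_price - price, 0.0)   # price below reference feels like a gain
    loss = max(price - ref_price, 0.0)   # price above reference feels like a (heavier) loss
    return -alpha * price + eta * (gain - lam * loss)

def choice_probabilities(prices, ref_prices, alpha=1.0, eta=0.5, lam=2.25):
    """Logit choice probabilities over the items plus an outside option with utility 0."""
    utils = np.array([loss_averse_utility(p, r, alpha, eta, lam)
                      for p, r in zip(prices, ref_prices)] + [0.0])
    expu = np.exp(utils - utils.max())
    return expu / expu.sum()

# A discounted item (price below its reference) draws excess demand; when the discount
# ends, demand shifts excessively toward its close substitute.
print(choice_probabilities(prices=[0.8, 1.0], ref_prices=[1.0, 1.0]))
```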
In future work, BROAD can be widely applicable for testing different behavioural models, e.g. in social preference and game theory, and in different contextual settings. Additional measurements beyond choice data, including biological measurements such as skin conductance, can be used to more rapidly eliminate hypotheses and speed up model comparison. Discrete choice models also provide a framework for testing behavioural models with field data, and encourage combined lab-field experiments.
Abstract:
Deficiencies in the mismatch repair (MMR) pathway are associated with several types of cancers, as well as resistance to commonly used chemotherapeutics. Rhodium metalloinsertors have been found to bind DNA mismatches with high affinity and specificity in vitro, and also exhibit cell-selective cytotoxicity, targeting MMR-deficient cells over MMR-proficient cells.
Here we examine the biological fate of rhodium metalloinsertors bearing dipyridylamine ancillary ligands. These complexes are shown to exhibit accelerated cellular uptake which permits the observation of various cellular responses, including disruption of the cell cycle and induction of necrosis, which occur preferentially in the MMR-deficient cell line. These cellular responses provide insight into the mechanisms underlying the selective activity of this novel class of targeted anti-cancer agents.
In addition, ten distinct metalloinsertors with varying lipophilicities are synthesized and their mismatch binding affinities and biological activities studied. While they are found to have similar binding affinities, their cell-selective antiproliferative and cytotoxic activities vary significantly. Inductively coupled plasma mass spectrometry (ICP-MS) experiments show that all of these metalloinsertors localize in the nucleus at sufficient concentrations for binding to DNA mismatches. Furthermore, metalloinsertors with high rhodium localization in the mitochondria show toxicity that is not selective for MMR-deficient cells. This work supports the notion that specific targeting of the metalloinsertors to nuclear DNA gives rise to their cytotoxic and antiproliferative activities that are selective for cells deficient in MMR.
To explore further the basis of the unique selectivity of the metalloinsertors in targeting MMR-deficient cells, experiments were conducted using engineered NCI-H23 lung adenocarcinoma cells that contain a doxycycline-inducible shRNA which suppresses the expression of the MMR gene MLH1. Here we use this new cell line to further validate rhodium metalloinsertors as compounds capable of differentially inhibiting the proliferation of MMR-deficient cancer cells over isogenic MMR-proficient cells. General DNA damaging agents, such as cisplatin and etoposide, in contrast, are less effective in the induced cell line defective in MMR.
Finally, we describe a new subclass of metalloinsertors with enhanced potency and selectivity, in which the complexes show Rh-O coordination. In particular, it has been found that both Δ and Λ enantiomers of [Rh(chrysi)(phen)(DPE)]2+ bind to DNA with similar affinities, suggesting a possible different binding conformation than previous metalloinsertors. Remarkably, all members of this new family of compounds have significantly increased potency in a range of cellular assays; indeed, all are more potent than cisplatin, an FDA-approved anticancer drug, and the DNA-damaging agent MNNG. Moreover, these activities are coupled with high levels of selectivity for MMR-deficient cells.
Abstract:
Protein structure prediction has remained a major challenge in structural biology for more than half a century. Accelerated and cost-efficient sequencing technologies have allowed researchers to sequence new organisms and discover new protein sequences. Novel protein structure prediction technologies will allow researchers to study the structure of proteins, determine their roles in the underlying biological processes, and develop novel therapeutics.
The difficulty of the problem is twofold: (a) describing the energy landscape that corresponds to the protein structure, commonly referred to as the force-field problem; and (b) sampling the energy landscape to find the lowest-energy configuration, which is hypothesized to be the native state of the structure in solution. The two problems are interwoven and have to be solved simultaneously. This thesis is composed of three major contributions. In the first chapter we describe a novel high-resolution protein structure refinement algorithm called GRID. In the second chapter we present REMCGRID, an algorithm for generation of low-energy decoy sets. In the third chapter, we present a machine learning approach to ranking decoys by incorporating coarse-grain features of protein structures.
Abstract:
The isotopic composition of the enhanced low energy nitrogen and oxygen cosmic rays can provide information regarding the source of these particles. Using the Caltech Electron/Isotope Spectrometer aboard the IMP-7 satellite, a measurement of this isotopic composition was made. To determine the isotope response of the instrument, a calibration was performed, and it was determined that the standard range-energy tables were inadequate to calculate the isotope response. From the calibration, corrections to the standard range-energy tables were obtained which can be used to calculate the isotope response of this and similar instruments.
The low energy nitrogen and oxygen cosmic rays were determined to be primarily ^(14)N and ^(16)O. Upper limits were obtained for the abundances of the other stable nitrogen and oxygen isotopes. At the 84% confidence level the isotopic abundances are: ^(15)N/N ≤ 0.26 (5.6 - 12.7 MeV/nucleon), ^(17)O/O ≤ 0.13 (7.0 - 11.8 MeV/nucleon), ^(18)O/O ≤ 0.12 (7.0 - 11.2 MeV/nucleon). The nitrogen composition differs from higher energy measurements, which indicate that ^(15)N, which is thought to be secondary, is the dominant isotope. This implies that the low energy enhanced cosmic rays are not part of the same population as the higher energy cosmic rays and that they have not passed through enough material to produce a large fraction of ^(15)N. The isotopic composition of the low energy enhanced nitrogen and oxygen is consistent with the local acceleration theory of Fisk, Kozlovsky, and Ramaty, in which interstellar material is accelerated to several MeV/nucleon. If, on the other hand, the low energy nitrogen and oxygen result from nucleosynthesis in a galactic source, then the nucleosynthesis processes which produce an enhancement of nitrogen and oxygen and a depletion of carbon are restricted to producing predominantly ^(14)N and ^(16)O.
Experimental, Numerical and Analytical Studies of the MHD-driven plasma jet, instabilities and waves
Abstract:
This thesis describes a series of experimental, numerical, and analytical studies involving the Caltech magnetohydrodynamically (MHD)-driven plasma jet experiment. The plasma jet is created via a capacitor discharge that powers a magnetized coaxial planar electrode system. The jet is collimated and accelerated by MHD forces.
We present three-dimensional ideal MHD finite-volume simulations of the plasma jet experiment using an astrophysical magnetic tower as the baseline model. Compact magnetic energy/helicity injection is exploited in the simulation, analogous both to the experiment and to astrophysical situations. Detailed analysis provides a comprehensive description of the interplay of magnetic force, pressure, and flow effects. We delineate both the jet structure and the transition process that converts the injected magnetic energy to other forms.
When the experimental jet is sufficiently long, it undergoes a global kink instability and then a secondary local Rayleigh-Taylor instability caused by lateral acceleration of the kink instability. We present an MHD theory of the Rayleigh-Taylor instability on the cylindrical surface of a plasma flux rope in the presence of a lateral external gravity. The Rayleigh-Taylor instability is found to couple to the classic current-driven instability, resulting in a new type of hybrid instability. The coupled instability, produced by combination of helical magnetic field, curvature of the cylindrical geometry, and lateral gravity, is fundamentally different from the classic magnetic Rayleigh-Taylor instability occurring at a two-dimensional planar interface.
In the experiment, this instability cascade from macro-scale to micro-scale eventually leads to the failure of MHD. When the Rayleigh-Taylor instability becomes nonlinear, it compresses and pinches the plasma jet to a scale smaller than the ion skin depth and triggers a fast magnetic reconnection. We built a specially designed high-speed 3D magnetic probe and successfully detected the high frequency magnetic fluctuations of broadband whistler waves associated with the fast reconnection. The magnetic fluctuations exhibit power-law spectra. The magnetic components of single-frequency whistler waves are found to be circularly polarized regardless of the angle between the wave propagation direction and the background magnetic field.
Abstract:
The cosmic-ray positron and negatron spectra between 11 and 204 MeV have been measured in a series of three high-altitude balloon flights launched from Fort Churchill, Manitoba, on July 16, July 21, and July 29, 1968. The detector system consisted of a magnetic spectrometer utilizing a 1000-gauss permanent magnet, scintillation counters, and a Lucite Čerenkov counter.
Launches were timed so that the ascent through the 100 g/cm2 level of residual atmosphere occurred after the evening geomagnetic cutoff transition. Data gathered during ascent are used to correct for the contribution of atmospheric secondary electrons to the flux measured at float altitude. All flights floated near 2.4 g/cm2.
A pronounced morning intensity increase was observed in each flight. We present daytime positron and negatron data which support the interpretation of the diurnal flux variation as a change in the local geomagnetic cutoff. A large diurnal variation was observed in the count rate of positrons and negatrons with magnetic rigidities less than 11 MV and is evidence that the nighttime cutoff was well below this value.
Using nighttime data we derive extraterrestrial positron and negatron spectra. The positron-to-total-electron ratio which we measure indicates that the interstellar secondary, or collision, source contributes ≲ 50 percent of the electron flux within this energy interval. By comparing our measured positron spectrum with the positron spectrum calculated for the collision source we derive the absolute solar modulation for positrons in 1968. Assuming negligible energy loss during modulation, we derive the total interstellar electron spectrum as well as the spectrum of directly accelerated, or primary, electrons. We examine the effect of adiabatic deceleration and find that many of the conclusions regarding the interstellar electron spectrum are not significantly altered for an assumed energy loss of up to 50 percent of the original energy.