16 results for second position

in CaltechTHESIS


Abstract:

The reactivity of permethylzirconocene and permethylhafnocene complexes with various nucleophiles has been investigated. Permethylzirconocene reacts with sterically hindered ketenes and allenes to afford metallacycle products. Reaction of these cumulenes with permethylzirconocene hydride complexes affords enolate and σ-allyl species, respectively. Reactions which afford enolate products are nonstereospecific, whereas reactions which afford allyl products initially give a cis-σ-allyl complex which rearranges to its trans isomer. These reactions are proposed to proceed either through a Lewis acid-Lewis base interaction (ketenes) or through formation of a π-olefin intermediate (allenes).

Permethylzirconocene haloacyl complexes react with strong bases such as lithium diisopropylamide or methylene trimethylphosphorane to afford ketene compounds. Depending on the size of the alkyl ketene substituent, the hydrogenation of these compounds affords enolate-hydride products with varying degrees of stereoselectivity. The larger the substituent, the greater is the selectivity for cis hydrogenation products.

The reaction of permethylzirconocene dihydride and permethylhafnocene dihydride with methylene trimethylphosphorane affords methyl-hydride and dimethyl derivatives. Under appropriate conditions, the metallated-ylide complex 1, (η^5-C_5(CH_3)_5)_2Zr(H)CH_2PMe_2CH_2, is also obtained and has been structurally characterized by X-ray diffraction techniques. Reaction of 1 with CO affords (η^5-C_5(CH_3)_5)_2Zr(C,O-η^2-(PMe_3)HC=CO)H, which exists in solution as an equilibrium mixture of isomers. In one isomer (2), the η^2-acyl oxygen atom occupies a lateral equatorial coordination position about zirconium, whereas in the other isomer (3), the η^2-acyl oxygen atom occupies the central equatorial position. The equilibrium kinetics of the 2→3 isomerization have been studied and the structures of both complexes confirmed by X-ray diffraction methods. These studies suggest a mechanism for CO insertion into metal-carbon bonds of the early transition metals.

Permethylhafnocene dihydride and permethylzirconocene hydride complexes react with diazoalkanes to afford η^2-N,N'-hydrazonido species in which the terminal nitrogen atom of the diazoalkane molecule has inserted into a metal-hydride or metal-carbon bond. The structure of one of these compounds, Cp*_2Zr(NMeNCTol_2)OH, has been determined by X-ray diffraction techniques. Under appropriate conditions, the hydrazonido-hydride complexes react with a second equivalent of diazoalkane to afford η^1-N-hydrazonido-η^2-N,N'-hydrazonido species.

Abstract:

Neurons in the primate lateral intraparietal area (area LIP) carry visual, saccade-related, and eye position activities. The visual and saccade activities are anchored in a retinotopic framework, and the overall response magnitude is modulated by eye position. It was proposed that the modulation by eye position might be the basis of a distributed coding of target locations in a head-centered space. Other recording studies demonstrated that area LIP is involved in oculomotor planning. These results overall suggest that area LIP transforms sensory information for motor functions. In this thesis I further explore the role of area LIP in processing saccadic eye movements by observing the effects of reversible inactivation of this area. Macaque monkeys were trained to perform visually guided saccades, memory saccades, and a double saccade task designed to examine the use of the eye position signal. Finally, by intermixing visual saccades with trials in which two targets were presented on opposite sides of the fixation point, I examined the behavior of visual extinction.

In chapter 2, I will show that lesion of area LIP results in increased latency of contralesional visual and memory saccades. Contralesional memory saccades are also hypometric and slower in velocity. Moreover, the impairment of memory saccades does not vary with the duration of the delay period. This suggests that the oculomotor deficits observed after inactivation of area LIP are not due to the disruption of spatial memory.

In chapter 3, I will show that lesion of area LIP does not severely affect the processing of spontaneous eye movements. However, the monkeys made fewer contralesional saccades and tended to confine their gaze to the ipsilesional field after inactivation of area LIP. On the other hand, lesion of area LIP results in extinction of the contralesional stimulus. When the initial fixation position was varied so that the retinal and spatial locations of the targets could be dissociated, it was found that the extinction behavior could best be described in a head-centered coordinate frame.

In chapter 4, I will show that inactivation of area LIP disrupts the use of the eye position signal to compute the second movement correctly in the double saccade task. If the first saccade steps into the contralesional field, the error rate and latency of the second saccade are both increased. Furthermore, the direction of the first eye movement largely has no effect on the impairment of the second saccade. I will argue that this study provides important evidence that the extraretinal signal used for saccadic localization is eye position rather than a displacement vector.

In chapter 5, I will demonstrate that in parietal monkeys the eye drifts toward the lesion side at the end of the memory saccade in darkness. This result suggests that the eye position activity in the posterior parietal cortex is active in nature and subserves gaze holding.

Overall, these results further support the view that area LIP encodes spatial locations in a craniotopic framework and is involved in processing voluntary eye movements.

Abstract:

The design, synthesis, and characterization of two novel metalloprotein motifs are presented. The first project involved the design and construction of a protein motif which was programmed to form a tetradentate metal complex upon the addition of metal cations. The overall structure of the motif was based on a ββ super-secondary structure consisting of a flexible peptide sequence flanked by metal binding regions located at the carboxy and amino termini. The metal binding region near the amino terminus was constructed from a reverse turn motif with two metal ligating residues, (2R, 3R)-β-methyl-cysteine and histidine. Selection of the peptide sequence for this region was based on the conformational analysis of a series of tetrapeptides designed to form reverse turns in solution.

The stereospecific syntheses of a series of novel bipyridyl- and phenanthrolyl-substituted amino acids were carried out to provide ligands for the carboxy-terminus metal binding region. These residues were incorporated into peptide sequences using solid phase peptide synthesis protocols, and metal binding studies indicated that the metal binding properties of these ligands were dictated by the specific regioisomer of the heteroaromatic ring and the peptide primary sequence.

Finally, a peptide containing optimized components for the metal binding regions was prepared to test the ability of the compound to form the desired intramolecular peptide:metal cation complexes. Metal binding studies demonstrated that the peptide formed monomeric complexes with very high metal cation binding affinities and that the two metal binding regions act cooperatively in the metal binding process. The use of these systems in the design of proteins capable of regulating naturally occurring proteins is discussed.

The second project involved the semisynthesis of two horse heart cytochrome c mutants incorporating the bipyridyl-amino acids at position 72 of the protein sequence. Structural studies on the proteins indicated that the bipyridyl amino acids had a negligible effect on the protein structure. One of the mutants was modified with Ru(bpy)_2^(2+) to form a redox-active protein, and the modified protein was found to exhibit enhanced electron transfer between the heme and the introduced metal site.

Abstract:

The motion of a single Brownian particle of arbitrary size through a dilute colloidal dispersion of neutrally buoyant bath spheres of another characteristic size in a Newtonian solvent is examined in two contexts. First, the particle in question, the probe particle, is subject to a constant applied external force drawing it through the suspension as a simple model for active and nonlinear microrheology. The strength of the applied external force, normalized by the restoring forces of Brownian motion, is the Péclet number, Pe. This dimensionless quantity describes how strongly the probe is upsetting the equilibrium distribution of the bath particles. The mean motion and fluctuations in the probe position are related to quantities interpreted as an effective viscosity of the suspension. These quantities are calculated to first order in the volume fraction of bath particles and are intimately tied to the spatial distribution, or microstructure, of bath particles relative to the probe. For weak Pe, the disturbance to the equilibrium microstructure is dipolar in nature, with accumulation and depletion regions on the front and rear faces of the probe, respectively. With increasing applied force, the accumulation region compresses to form a thin boundary layer whose thickness scales with the inverse of Pe. The depletion region lengthens to form a trailing wake. The magnitude of the microstructural disturbance is found to grow with increasing bath particle size: small bath particles in the solvent resemble a continuum with an effective microviscosity given by Einstein's viscosity correction for a dilute dispersion of spheres. Large bath particles readily advect toward the minimum approach distance possible between the probe and bath particle, and rotation of the probe-bath pair as a doublet is the primary mechanism by which the probe is able to move past; this process slows the motion of the probe by a factor of the size ratio.
The intrinsic microviscosity is found to force-thin at low Péclet number, due to decreasing contributions from Brownian motion, and to force-thicken at high Péclet number, due to the increasing influence of the configuration-averaged reduction in the probe's hydrodynamic self-mobility. Nonmonotonicity at finite sizes is evident in the limiting high-Pe intrinsic microviscosity plateau as a function of bath-to-probe particle size ratio. The intrinsic microviscosity is found to grow with the size ratio for very small probes even at large-but-finite Péclet numbers. However, even a small repulsive interparticle potential that excludes lubrication interactions can reduce this intrinsic microviscosity back to an order-one quantity. The results of this active microrheology study are compared to previous theoretical studies of falling-ball and towed-ball rheometry and of sedimentation and diffusion in polydisperse suspensions, and the singular limit of full hydrodynamic interactions is noted.
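The normalization behind the Péclet number can be made concrete with a short numerical sketch. This is illustrative only and not a calculation from the thesis: the piconewton force, the micron probe radius, and the exact O(1) convention Pe = F·a/(k_B·T) are all assumptions.

```python
# Illustrative Peclet number for forced microrheology (assumed convention:
# Pe = F * a / (kB * T); studies differ by O(1) factors in the definition).
kB = 1.380649e-23   # Boltzmann constant, J/K
T = 298.0           # temperature, K
F = 1.0e-12         # applied external force, N (1 pN, hypothetical)
a = 1.0e-6          # probe radius, m (1 micron, hypothetical)

Pe = F * a / (kB * T)
print(f"Pe = {Pe:.0f}")   # a few hundred: strongly driven regime
```

For Pe << 1 the dipolar equilibrium-perturbation picture applies; for Pe >> 1, as in these hypothetical numbers, the thin accumulation boundary layer and trailing wake dominate.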

Second, the probe particle in question is no longer subject to a constant applied external force. Rather, the particle is considered to be a catalytically active motor, consuming the bath reactant particles on its reactive face while passively colliding with reactant particles on its inert face. By creating an asymmetric distribution of reactant about its surface, the motor is able to propel itself diffusiophoretically with some mean velocity. The effects of the finite size of the solute on the leading-order diffusive microstructure of reactant about the motor are examined. Brownian and interparticle contributions to the motor velocity are computed for several interparticle interaction potential lengths and finite reactant-to-motor particle size ratios, with the dimensionless motor velocity increasing with decreasing motor size. A discussion of Brownian rotation frames the context in which these results could be applicable, and future directions are proposed which properly incorporate reactant advection at high motor velocities.

Abstract:

Because so little is known about the structure of membrane proteins, an attempt has been made in this work to develop techniques by which to model them in three dimensions. The procedures devised rely heavily upon the availability of several sequences of a given protein. The modelling procedure is composed of two parts. The first identifies transmembrane regions within the protein sequence on the basis of hydrophobicity, β-turn potential, and the presence of certain amino acid types, specifically, proline and basic residues. The second part of the procedure arranges these transmembrane helices within the bilayer based upon the evolutionary conservation of their residues. Conserved residues are oriented toward other helices and variable residues are positioned to face the surrounding lipids. Available structural information concerning the protein's helical arrangement, including the lengths of interhelical loops, is also taken into account. Rhodopsin, band 3, and the nicotinic acetylcholine receptor have all been modelled using this methodology, and mechanisms of action could be proposed based upon the resulting structures.
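The transmembrane-identification step described above lends itself to a small illustration. The sketch below is a generic sliding-window hydropathy scan in the spirit of the first part of the procedure; the Kyte-Doolittle scale, the 19-residue window, the ~1.6 cutoff, and the toy sequence are assumptions, and the thesis's actual procedure additionally weighs β-turn potential, prolines, and basic residues.

```python
# Generic sliding-window hydropathy scan (Kyte-Doolittle scale).
# Illustration only -- not the thesis's full procedure.
KD = {'I': 4.5, 'V': 4.2, 'L': 3.8, 'F': 2.8, 'C': 2.5, 'M': 1.9,
      'A': 1.8, 'G': -0.4, 'T': -0.7, 'S': -0.8, 'W': -0.9, 'Y': -1.3,
      'P': -1.6, 'H': -3.2, 'E': -3.5, 'Q': -3.5, 'D': -3.5, 'N': -3.5,
      'K': -3.9, 'R': -4.5}

def hydropathy(seq, window=19):
    """Mean hydropathy over a sliding window; ~19 residues spans a bilayer."""
    return [sum(KD[c] for c in seq[i:i + window]) / window
            for i in range(len(seq) - window + 1)]

# Hypothetical sequence: a hydrophobic stretch flanked by polar residues.
seq = "DDKRSN" + "LVILAVLFIVLALVILAIV" + "NSKRDE"
scores = hydropathy(seq)
peak = max(scores)
print(f"peak window hydropathy = {peak:.2f}")  # well above a ~1.6 cutoff
```

Windows whose mean hydropathy exceeds the cutoff are candidate transmembrane helices; the second part of the procedure would then orient them using residue conservation.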

Specific residues in the rhodopsin and iodopsin sequences were identified, which may regulate the proteins' wavelength selectivities. A hinge-like motion of helices M3, M4, and M5 with respect to the rest of the protein was proposed to result in the activation of transducin, the G-protein associated with rhodopsin. A similar mechanism is also proposed for signal transduction by the muscarinic acetylcholine and β-adrenergic receptors.

The nicotinic acetylcholine receptor was modelled with four transmembrane helices per subunit and with the five homologous M2 helices forming the cation channel. Putative channel-lining residues were identified, and a mechanism of channel opening based upon the concerted, tangential rotation of the M2 helices was proposed.

Band 3, the anion exchange protein found in the erythrocyte membrane, was modelled with 14 transmembrane helices. In general the pathway of anion transport can be viewed as a channel composed of six helices that contains a single hydrophobic restriction. This hydrophobic region will not allow the passage of charged species, unless they are part of an ion-pair. An arginine residue located near this restriction is proposed to be responsible for anion transport. When ion-paired with a transportable anion it rotates across the barrier and releases the anion on the other side of the membrane. A similar process returns it to its original position. This proposed mechanism, based on the three-dimensional model, can account for the passive, electroneutral, anion exchange observed for band 3. Dianions can be transported through a similar mechanism with the additional participation of a histidine residue. Both residues are located on M10.

Abstract:

Moving mesh methods (also called r-adaptive methods) are space-adaptive strategies used for the numerical simulation of time-dependent partial differential equations. These methods keep the total number of mesh points fixed during the simulation, but redistribute them over time to follow the areas where a higher mesh point density is required. There are a very limited number of moving mesh methods designed for solving field-theoretic partial differential equations, and the numerical analysis of the resulting schemes is challenging. In this thesis we present two ways to construct r-adaptive variational and multisymplectic integrators for (1+1)-dimensional Lagrangian field theories. The first method uses a variational discretization of the physical equations and the mesh equations are then coupled in a way typical of the existing r-adaptive schemes. The second method treats the mesh points as pseudo-particles and incorporates their dynamics directly into the variational principle. A user-specified adaptation strategy is then enforced through Lagrange multipliers as a constraint on the dynamics of both the physical field and the mesh points. We discuss the advantages and limitations of our methods. The proposed methods are readily applicable to (weakly) non-degenerate field theories---numerical results for the Sine-Gordon equation are presented.
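The r-adaptive idea of redistributing a fixed number of mesh points can be illustrated with the classical equidistribution principle. The sketch below is a generic equidistribution step, not the variational or multisymplectic schemes of the thesis: the arc-length monitor function, the tanh test field, and the mesh size are all assumptions.

```python
import numpy as np

# Generic 1-D equidistribution step for a moving (r-adaptive) mesh.
# Points are placed so that each cell carries an equal share of the
# integral of a monitor function, here the arc-length monitor
# M = sqrt(1 + u_x^2). Illustration only.

N = 41
x = np.linspace(-1.0, 1.0, N)      # current (uniform) mesh
u = np.tanh(20 * x)                # a field with a sharp layer at x = 0

ux = np.gradient(u, x)
M = np.sqrt(1.0 + ux**2)           # monitor function on the old mesh

# Cumulative integral of M (trapezoid rule), normalized to [0, 1] ...
s = np.concatenate(([0.0], np.cumsum(0.5 * (M[1:] + M[:-1]) * np.diff(x))))
s /= s[-1]

# ... then inverted: the equidistributed mesh has equal increments in s.
x_new = np.interp(np.linspace(0.0, 1.0, N), s, x)

h_min = np.min(np.diff(x_new))     # smallest spacing (near the layer)
h_max = np.max(np.diff(x_new))     # largest spacing (in the flat regions)
print(h_min < h_max)               # points cluster where u varies rapidly
```

In a full moving-mesh solver this redistribution would be coupled to the time evolution of the physical field; the thesis instead builds the mesh dynamics into the variational principle itself.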

In an attempt to extend our approach to degenerate field theories, in the last part of this thesis we construct higher-order variational integrators for a class of degenerate systems described by Lagrangians that are linear in velocities. We analyze the geometry underlying such systems and develop the appropriate theory for variational integration. Our main observation is that the evolution takes place on the primary constraint and the 'Hamiltonian' equations of motion can be formulated as an index 1 differential-algebraic system. We then proceed to construct variational Runge-Kutta methods and analyze their properties. The general properties of Runge-Kutta methods depend on the 'velocity' part of the Lagrangian. If the 'velocity' part is also linear in the position coordinate, then we show that non-partitioned variational Runge-Kutta methods are equivalent to integration of the corresponding first-order Euler-Lagrange equations, which have the form of a Poisson system with a constant structure matrix, and the classical properties of the Runge-Kutta method are retained. If the 'velocity' part is nonlinear in the position coordinate, we observe a reduction of the order of convergence, which is typical of numerical integration of DAEs. We also apply our methods to several models and present the results of our numerical experiments.

Abstract:

In the quest to develop viable designs for third-generation optical interferometric gravitational-wave detectors, one strategy is to monitor the relative momentum or speed of the test-mass mirrors, rather than monitoring their relative position. The most straightforward design for a speed-meter interferometer that accomplishes this is described and analyzed in Chapter 2. This design (due to Braginsky, Gorodetsky, Khalili, and Thorne) is analogous to a microwave-cavity speed meter conceived by Braginsky and Khalili. A mathematical mapping between the microwave speed meter and the optical interferometric speed meter is developed and used to show (in accord with the speed being a quantum nondemolition observable) that in principle the interferometric speed meter can beat the gravitational-wave standard quantum limit (SQL) by an arbitrarily large amount, over an arbitrarily wide range of frequencies. However, in practice, to reach or beat the SQL, this specific speed meter requires exorbitantly high input light power. The physical reason for this is explored, along with other issues such as constraints on performance due to optical dissipation.

Chapter 3 proposes a more sophisticated version of a speed meter. This new design requires only a modest input power and appears to be a fully practical candidate for third-generation LIGO. It can beat the SQL (the approximate sensitivity of second-generation LIGO interferometers) over a broad range of frequencies (~10 to 100 Hz in practice) by a factor h/h_SQL ~ √(W^SQL_circ/W_circ). Here W_circ is the light power circulating in the interferometer arms and W^SQL_circ ≃ 800 kW is the circulating power required to beat the SQL at 100 Hz (the LIGO-II power). If squeezed vacuum (with a power-squeeze factor e^(-2R)) is injected into the interferometer's output port, the SQL can be beat with a much reduced laser power: h/h_SQL ~ √(e^(-2R) W^SQL_circ/W_circ). For realistic parameters (e^(-2R) ≃ 0.1 and W_circ ≃ 800 to 2000 kW), the SQL can be beat by a factor of ~3 to 4 from 10 to 100 Hz. [However, as the power increases in these expressions, the speed meter becomes more narrowband; additional power and re-optimization of some parameters are required to maintain the wide band.] By performing frequency-dependent homodyne detection on the output (with the aid of two kilometer-scale filter cavities), one can markedly improve the interferometer's sensitivity at frequencies above 100 Hz.
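The quoted scaling is easy to evaluate numerically. The sketch below is illustrative arithmetic only, assuming the relation h/h_SQL ~ √(e^(-2R)·W^SQL_circ/W_circ) and a power-squeeze factor of 0.1; the full frequency-dependent sensitivity requires the interferometer transfer functions.

```python
import math

# Evaluating the quoted scaling h/h_SQL ~ sqrt(e^(-2R) * W_SQL / W_circ).
# Assumed parameters: W_SQL = 800 kW and a power-squeeze factor of 0.1.
W_SQL = 800.0    # kW, circulating power needed to reach the SQL at 100 Hz
squeeze = 0.1    # power-squeeze factor e^(-2R), an assumption

for W_circ in (800.0, 2000.0):
    ratio = math.sqrt(squeeze * W_SQL / W_circ)
    print(f"W_circ = {W_circ:.0f} kW: h/h_SQL ~ {ratio:.2f} "
          f"(beats the SQL by ~{1 / ratio:.1f}x)")
```

This crude scaling gives beat factors in the few-times range for the stated circulating powers, in line with the ~3 to 4 quoted above once band-shape effects are accounted for.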

Chapters 2 and 3 are part of an ongoing effort to develop a practical variant of an interferometric speed meter and to combine the speed meter concept with other ideas to yield a promising third-generation interferometric gravitational-wave detector that entails low laser power.

Chapter 4 is a contribution to the foundations for analyzing sources of gravitational waves for LIGO. Specifically, it presents an analysis of the tidal work done on a self-gravitating body (e.g., a neutron star or black hole) in an external tidal field (e.g., that of a binary companion). The change in the mass-energy of the body as a result of the tidal work, or "tidal heating," is analyzed using the Landau-Lifshitz pseudotensor and the local asymptotic rest frame of the body. It is shown that the work done on the body is gauge invariant, while the body-tidal-field interaction energy contained within the body's local asymptotic rest frame is gauge dependent. This is analogous to Newtonian theory, where the interaction energy is shown to depend on how one localizes gravitational energy, but the work done on the body is independent of that localization. These conclusions play a role in analyses, by others, of the dynamics and stability of the inspiraling neutron-star binaries whose gravitational waves are likely to be seen and studied by LIGO.

Abstract:

The warm plasma resonance cone structure of the quasistatic field produced by a gap source in a bounded magnetized slab plasma is determined theoretically. This is initially determined for a homogeneous or mildly inhomogeneous plasma with source frequency lying between the lower hybrid frequency and the plasma frequency. It is then extended to the complicated case of an inhomogeneous plasma with two internal lower hybrid layers present, which is of interest to radio frequency heating of plasmas.

In the first case, the potential is obtained as a sum of multiply reflected warm plasma resonance cones, each of which has a similar structure, but a different size, amplitude, and position. An important interference between nearby multiply-reflected resonance cones is found. The cones are seen to spread out as they move away from the source, so that this interference increases and the individual resonance cones become obscured far away from the source.

In the second case, the potential is found to be expressible as a sum of multiply-reflected, multiply-tunnelled, and mode-converted resonance cones, each of which has a unique but similar structure. The effects of both collisional and collisionless damping are included, and their influence on the decay of the cone structure is studied. Various properties of the cones, such as how they move into and out of the hybrid layers, through the evanescent region, and transform at the hybrid layers, are determined. It is found that cones can tunnel through the evanescent layer if the layer is thin, and the effect of a thin evanescent layer is to subdue the secondary maxima of the cone relative to the main peak, while slightly broadening the main peak and shifting it closer to the cold plasma cone line.

Energy theorems for quasistatic fields are developed and applied to determine the power flow and absorption along the individual cones. This reveals the points of concentration of the flow and the various absorption mechanisms.

Abstract:

A description is given of experimental work on the damping of a second order electron plasma wave echo due to velocity space diffusion in a low temperature magnetoplasma. Sufficient precision was obtained to verify the theoretically predicted cubic rather than quadratic or quartic dependence of the damping on exciter separation. Compared to the damping predicted for Coulomb collisions in a thermal plasma in an infinite magnetic field, the magnitude of the damping was approximately as predicted, while the velocity dependence of the damping was weaker than predicted. The discrepancy is consistent with the actual non-Maxwellian electron distribution of the plasma.

In conjunction with the damping work, echo amplitude saturation was measured as a function of the velocity of the electrons contributing to the echo. Good agreement was obtained with the predicted J1 Bessel function amplitude dependence, as well as a demonstration that saturation did not influence the damping results.

Abstract:

While some of the deepest results in nature are those that give explicit bounds between important physical quantities, some of the most intriguing and celebrated of such bounds come from fields where there is still a great deal of disagreement and confusion regarding even the most fundamental aspects of the theories. For example, in quantum mechanics, there is still no complete consensus as to whether the limitations associated with Heisenberg's Uncertainty Principle derive from an inherent randomness in physics, or rather from limitations in the measurement process itself, resulting from phenomena like back action. Likewise, the second law of thermodynamics makes a statement regarding the increase in entropy of closed systems, yet the theory itself has neither a universally-accepted definition of equilibrium, nor an adequate explanation of how a system with underlying microscopically Hamiltonian dynamics (reversible) settles into a fixed distribution.

Motivated by these physical theories, and perhaps their inconsistencies, in this thesis we use dynamical systems theory to investigate how the very simplest of systems, even with no physical constraints, are characterized by bounds that give limits to the ability to make measurements on them. Using an existing interpretation, we start by examining how dissipative systems can be viewed as high-dimensional lossless systems, and how taking this view necessarily implies the existence of a noise process that results from the uncertainty in the initial system state. This fluctuation-dissipation result plays a central role in a measurement model that we examine, in particular describing how noise is inevitably injected into a system during a measurement, noise that can be viewed as originating either from the randomness of the many degrees of freedom of the measurement device, or of the environment. This noise constitutes one component of measurement back action, and ultimately imposes limits on measurement uncertainty. Depending on the assumptions we make about active devices, and their limitations, this back action can be offset to varying degrees via control. It turns out that using active devices to reduce measurement back action leads to estimation problems that have non-zero uncertainty lower bounds, the most interesting of which arise when the observed system is lossless. One such lower bound, a main contribution of this work, can be viewed as a classical version of a Heisenberg uncertainty relation between the system's position and momentum. We finally also revisit the murky question of how macroscopic dissipation appears from lossless dynamics, and propose alternative approaches for framing the question using existing systematic methods of model reduction.

Abstract:

This thesis has two major parts. The first part of the thesis describes a high energy cosmic ray detector, the High Energy Isotope Spectrometer Telescope (HEIST). HEIST is a large area (0.25 m^2 sr) balloon-borne isotope spectrometer designed to make high-resolution measurements of isotopes in the element range from neon to nickel (10 ≤ Z ≤ 28) at energies of about 2 GeV/nucleon. The instrument consists of a stack of 12 NaI(Tl) scintillators, two Cerenkov counters, and two plastic scintillators. Each of the 2-cm thick NaI disks is viewed by six 1.5-inch photomultipliers whose combined outputs measure the energy deposition in that layer. In addition, the six outputs from each disk are compared to determine the position at which incident nuclei traverse each layer to an accuracy of ~2 mm. The Cerenkov counters, which measure particle velocity, are each viewed by twelve 5-inch photomultipliers using light integration boxes.

HEIST-2 determines the mass of individual nuclei by measuring both the change in the Lorentz factor (Δγ) that results from traversing the NaI stack, and the energy loss (ΔΕ) in the stack. Since the total energy of an isotope is given by Ε = γM, the mass M can be determined by M = ΔΕ/Δγ. The instrument is designed to achieve a typical mass resolution of 0.2 amu.
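The mass-determination principle quoted above reduces to one line of arithmetic. The sketch below uses hypothetical values for ΔΕ and Δγ, with energies in MeV and units where E = γM (931.5 MeV per amu).

```python
# Illustration of the HEIST mass-determination principle: with total
# energy E = gamma * M (units where c = 1), a measured energy loss
# Delta_E and Lorentz-factor change Delta_gamma give M = Delta_E / Delta_gamma.
# The numbers below are hypothetical.
AMU_MEV = 931.494        # 1 amu expressed in MeV

delta_E = 26_082.0       # MeV deposited in the NaI stack (hypothetical)
delta_gamma = 0.50       # Lorentz-factor change across the stack (hypothetical)

M = delta_E / delta_gamma / AMU_MEV   # mass in amu
print(f"M = {M:.1f} amu")             # an iron-group nucleus
```

A 0.2 amu mass resolution therefore demands that the fractional errors in ΔΕ and Δγ be kept below roughly 0.4% for an A ≈ 56 nucleus.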

The second part of this thesis presents an experimental measurement of the isotopic composition of the fragments from the breakup of high energy 40Ar and 56Fe nuclei. Cosmic ray composition studies rely heavily on semi-empirical estimates of the cross-sections for the nuclear fragmentation reactions which alter the composition during propagation through the interstellar medium. Experimentally measured yields of isotopes from the fragmentation of 40Ar and 56Fe are compared with calculated yields based on semi-empirical cross-section formulae. There are two sets of measurements. The first set, made at the Lawrence Berkeley Laboratory Bevalac using a beam of 287 MeV/nucleon 40Ar incident on a CH2 target, achieves excellent mass resolution (σm ≤ 0.2 amu) for isotopes of Mg through K using a Si(Li) detector telescope. The second set, also made at the Lawrence Berkeley Laboratory Bevalac using a beam of 583 MeV/nucleon 56Fe incident on a CH2 target, resolved Cr, Mn, and Fe fragments with a typical mass resolution of ~0.25 amu through the use of the Heavy Isotope Spectrometer Telescope (HIST), which was later carried into space on ISEE-3 in 1978. The general agreement between calculation and experiment is good, but some significant differences are reported here.

Abstract:

In the first part of the thesis we explore three fundamental questions that arise naturally when we conceive a machine learning scenario where the training and test distributions can differ. Contrary to conventional wisdom, we show that mismatched training and test distributions can in fact yield better out-of-sample performance. This optimal performance can be obtained by training with the dual distribution. This optimal training distribution depends on the test distribution set by the problem, but not on the target function that we want to learn. We show how to obtain this distribution in both discrete and continuous input spaces, as well as how to approximate it in a practical scenario. Benefits of using this distribution are exemplified in both synthetic and real data sets.

In order to apply the dual distribution in the supervised learning scenario where the training data set is fixed, it is necessary to use weights to make the sample appear as if it came from the dual distribution. We explore the negative effect that weighting a sample can have. The theoretical decomposition of the use of weights regarding its effect on the out-of-sample error is easy to understand but not actionable in practice, as the quantities involved cannot be computed. Hence, we propose the Targeted Weighting algorithm that determines if, for a given set of weights, the out-of-sample performance will improve or not in a practical setting. This is necessary as the setting assumes there are no labeled points distributed according to the test distribution, only unlabeled samples.
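The role of the weights can be sketched with standard covariate-shift importance weighting. This is a generic illustration, not the thesis's Targeted Weighting algorithm: the Gaussian train/test input densities, the sine target, and the linear model are all assumptions, and in practice the densities would themselves have to be estimated.

```python
import numpy as np

# Generic covariate-shift weighting: each training point gets weight
# w(x) = p_test(x) / p_train(x), so the weighted sample mimics draws
# from the test distribution. Illustration only.
rng = np.random.default_rng(0)

# Hypothetical 1-D Gaussian training and test input densities.
mu_tr, mu_te, sigma = 0.0, 1.0, 1.0
x = rng.normal(mu_tr, sigma, size=2000)          # training inputs
y = np.sin(x) + 0.1 * rng.normal(size=x.size)    # noisy targets

def gauss(x, mu, s):
    return np.exp(-0.5 * ((x - mu) / s) ** 2) / (s * np.sqrt(2 * np.pi))

w = gauss(x, mu_te, sigma) / gauss(x, mu_tr, sigma)   # importance weights

# Weighted least squares for a line y ~ a*x + b: the fit now targets
# the region around mu_te rather than mu_tr.
X = np.column_stack([x, np.ones_like(x)])
WX = X * w[:, None]
a, b = np.linalg.solve(X.T @ WX, WX.T @ y)
print(f"weighted fit: y ~ {a:.2f} x + {b:.2f}")
```

The downside explored in the thesis is visible here too: heavy weights concentrate the effective sample on few points, inflating variance, which is exactly the trade-off a method like Targeted Weighting must assess.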

Finally, we propose a new class of matching algorithms that can be used to match the training set to a desired distribution, such as the dual distribution (or the test distribution). These algorithms can be applied to very large datasets, and we show how they lead to improved performance on a large real dataset such as the Netflix dataset. Their low computational complexity is their main advantage over previous algorithms proposed in the covariate shift literature.
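One simple way to match a sample to a target distribution is rejection-based subsampling. The sketch below is a generic illustration under assumed Gaussian densities, not the thesis's matching algorithm:

```python
import math
import random

def gaussian_pdf(mu, sigma):
    """1-D Gaussian density (illustrative choice for this sketch)."""
    return lambda x: math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def match_by_rejection(xs, p_train, p_target, rng):
    """Keep each training point x with probability proportional to
    p_target(x) / p_train(x); the surviving subsample is approximately
    distributed according to p_target."""
    ratios = [p_target(x) / p_train(x) for x in xs]
    c = max(ratios)  # envelope constant over the observed sample
    return [x for x, r in zip(xs, ratios) if rng.random() < r / c]

rng = random.Random(0)
train = [rng.gauss(0.0, 1.0) for _ in range(4000)]
matched = match_by_rejection(train, gaussian_pdf(0.0, 1.0), gaussian_pdf(1.0, 1.0), rng)
# The matched subsample's mean shifts toward the target mean of 1.0.
```

Rejection sampling discards most of the data when the distributions differ substantially, which is one reason more efficient matching algorithms matter at Netflix-dataset scale.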

In the second part of the thesis we apply machine learning to the problem of behavior recognition. We develop a specific behavior classifier to study fly aggression, and we develop a system that analyzes the behavior of animals in videos with minimal supervision. The system, which we call CUBA (Caltech Unsupervised Behavior Analysis), detects movemes, actions, and stories from time series describing the positions of animals in videos. The method summarizes the data and provides biologists with a mathematical tool to test new hypotheses. Other benefits of CUBA include finding classifiers for specific behaviors without the need for annotation, as well as providing a means to discriminate groups of animals, for example according to their genetic line.


A series of meso-phenyloctamethylporphyrins covalently bonded at the 4'-phenyl position to quinones via rigid bicyclo[2.2.2]octane spacers were synthesized for the study of the dependence of the electron transfer reaction rate on solvent, distance, temperature, and energy gap. A general and convergent synthesis was developed based on the condensation of ac-biladienes with masked quinone-spacer-benzaldehydes. Emission lifetimes were measured by picosecond fluorescence spectroscopy in seven solvents of varying polarity. Rate constants were determined to vary from 5.0 x 10^9 sec^-1 in N,N-dimethylformamide to 1.15 x 10^10 sec^-1 in benzene, and were observed to rise at most by about a factor of three with decreasing solvent polarity. Experiments at low temperature in 2-MTHF glass (77 K) revealed fast, nearly temperature-independent electron transfer characterized by non-exponential fluorescence decays, in contrast to monophasic behavior in fluid solution at 298 K. This example evidently represents the first photosynthetic model system not based on proteins to display nearly temperature-independent electron transfer at high temperatures (nuclear tunneling). Low temperatures appear to freeze out the rotational motion of the chromophores, and the observed nonexponential fluorescence decays may be explained as the result of electron transfer from an ensemble of rotational conformations. The nonexponentiality demonstrates the sensitivity of the electron transfer rate to the precise magnitude of the electronic matrix element, which supports the expectation that electron transfer is nonadiabatic in this system. The addition of a second bicyclooctane moiety (15 Å vs. 18 Å edge-to-edge between porphyrin and quinone) reduces the transfer rate by at least a factor of 500-1500. Porphyrin-quinones with variously substituted quinones allowed an examination of the dependence of the electron transfer rate constant κ_ET on reaction driving force.
The classical trend of increasing rate with increasing exothermicity occurs from 0.7 eV ≤ |ΔG^0'(R)| ≤ 1.0 eV until a maximum is reached (κ_ET = 3 x 10^8 sec^-1 rising to 1.15 x 10^10 sec^-1 in acetonitrile). The rate remains insensitive to ΔG^0 for ~300 mV, from 1.0 eV ≤ |ΔG^0'(R)| ≤ 1.3 eV, and then slightly decreases in the most exothermic case studied (cyanoquinone, κ_ET = 5 x 10^9 sec^-1).
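The rise, plateau, and slight turnover described above is the classical Marcus picture of electron transfer. A minimal numerical sketch of the Marcus expression, with an assumed reorganization energy and prefactor that are illustrative rather than fitted to the porphyrin-quinone data:

```python
import math

KB_EV = 8.617e-5  # Boltzmann constant in eV/K

def marcus_rate(dG, lam, prefactor=1.0e10, T=298.0):
    """Classical Marcus rate k = A * exp(-(dG + lam)^2 / (4 * lam * kB * T)).

    dG is the reaction free energy (negative for exothermic reactions, eV)
    and lam the reorganization energy (eV).  The prefactor A and lam used
    below are placeholders, not values from this work.
    """
    return prefactor * math.exp(-(dG + lam) ** 2 / (4.0 * lam * KB_EV * T))

lam = 1.1  # illustrative reorganization energy (eV)
# The rate peaks when -dG equals lam and falls off in the inverted region.
k_normal, k_peak, k_inverted = (marcus_rate(dG, lam) for dG in (-0.7, -1.1, -1.4))
```

With |ΔG| < λ the rate grows with exothermicity; near |ΔG| ≈ λ it is maximal and nearly flat; beyond that it decreases, matching the trend reported for the substituted quinones.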


Multi-finger caging offers a rigorous and robust approach to robot grasping. This thesis provides several novel algorithms for caging polygons and polyhedra in two and three dimensions. Caging refers to a robotic grasp that does not necessarily immobilize an object, but prevents it from escaping to infinity. The first algorithm considers caging a polygon in two dimensions using two point fingers. The second algorithm extends the first to three dimensions. The third algorithm considers caging a convex polygon in two dimensions using three point fingers, and considers robustness of this cage to variations in the relative positions of the fingers.

This thesis describes an algorithm for finding all two-finger cage formations of planar polygonal objects based on a contact-space formulation. It shows that two-finger cages have several useful properties in contact space. First, the critical points of the cage representation in the hand’s configuration space appear as critical points of the inter-finger distance function in contact space. Second, these critical points can be graphically characterized directly on the object’s boundary. Third, contact space admits a natural rectangular decomposition such that all critical points lie on the rectangle boundaries, and the sublevel sets of contact space and free space are topologically equivalent. These properties lead to a caging graph that can be readily constructed in contact space. Starting from a desired immobilizing grasp of a polygonal object, the caging graph is searched for the minimal, intermediate, and maximal caging regions surrounding the immobilizing grasp. An example constructed from real-world data illustrates and validates the method.
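The central object in the contact-space formulation above is the inter-finger distance function with both point fingers constrained to the object's boundary. A minimal sketch of that function (the polygon and the arclength parameterization are illustrative; this is not the thesis's caging-graph search):

```python
import math

def boundary_point(poly, s):
    """Point at arclength s along the closed boundary of polygon `poly`,
    given as a list of (x, y) vertices in order."""
    edges = [math.dist(poly[i], poly[(i + 1) % len(poly)]) for i in range(len(poly))]
    s %= sum(edges)
    for i, length in enumerate(edges):
        if s <= length:
            (x0, y0), (x1, y1) = poly[i], poly[(i + 1) % len(poly)]
            t = s / length
            return (x0 + t * (x1 - x0), y0 + t * (y1 - y0))
        s -= length

def interfinger_distance(poly, s1, s2):
    """Inter-finger distance on contact space: both fingers on the
    object's boundary, at arclengths s1 and s2."""
    return math.dist(boundary_point(poly, s1), boundary_point(poly, s2))
```

Critical points of this two-variable function over the (s1, s2) contact space are, per the thesis, exactly where the cage representation in the hand's configuration space has its critical points, which is what makes the rectangular decomposition and graph search possible.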

A second algorithm is developed for finding caging formations of a 3D polyhedron for two point fingers using a lower dimensional contact-space formulation. Results from the two-dimensional algorithm are extended to three dimensions. Critical points of the inter-finger distance function are shown to be identical to the critical points of the cage. A decomposition of contact space into 4D regions having useful properties is demonstrated. A geometric analysis of the critical points of the inter-finger distance function results in a catalog of grasps in which the cages change topology, leading to a simple test to classify critical points. With these properties established, the search algorithm from the two-dimensional case may be applied to the three-dimensional problem. An implemented example demonstrates the method.

This thesis also presents a study of cages of convex polygonal objects using three point fingers. It considers a three-parameter model of the relative position of the fingers, which gives complete generality for three point fingers in the plane. It analyzes the robustness of caging grasps: how far the relative position of the fingers can vary without breaking the cage. Using a simple decomposition of the free space around the polygon, we present an algorithm that gives all caging placements of the fingers and a characterization of the robustness of these cages.


Thermodynamic fluctuations in temperature and position exist in every physical system, and show up as a fundamental noise limit whenever we measure some quantity in a laboratory environment. Thermodynamic fluctuations in the positions of the atoms in the dielectric coatings on the mirrors of optical cavities at the forefront of precision metrology (e.g., LIGO, or the cavities which probe atomic transitions to define the second) are a current limiting noise source for these experiments, and for anything which involves locking a laser to an optical cavity. These thermodynamic noise sources scale with the physical geometry of the experiment, with material properties (such as the mechanical loss of the dielectric coatings), and with temperature. The temperature scaling provides a natural motivation to move to lower temperatures: a room-temperature experiment limited by thermal noise can potentially gain enormously by being redesigned for cryogenic operation.
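The temperature scaling that motivates cryogenic operation can be sketched with the generic fluctuation-dissipation form of coating Brownian noise, S_x(f, T) ∝ kB T φ / f. The overall scale and loss angle below are illustrative placeholders, not measured silica/tantala values or this thesis's noise budget:

```python
KB = 1.380649e-23  # Boltzmann constant, J/K

def brownian_noise_psd(f, T, phi, scale=1.0):
    """Generic coating Brownian displacement-noise PSD scaling,
    S_x(f, T) = scale * kB * T * phi / f (arbitrary units).
    `scale` and the loss angle `phi` are placeholders."""
    return scale * KB * T * phi / f

# If the loss angle were temperature independent, cooling from 300 K to
# 123 K (silicon's CTE zero crossing) would lower the PSD by 300/123,
# i.e. the amplitude spectral density by the square root of that.
ratio = brownian_noise_psd(100.0, 300.0, 4e-4) / brownian_noise_psd(100.0, 123.0, 4e-4)
```

In practice the loss angle φ itself depends on temperature, which is precisely why the cryogenic coating properties need to be measured rather than extrapolated.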

We design, build, and characterize a pair of linear Fabry-Perot cavities to explore the limitations of ultra-low-noise laser stabilization experiments at cryogenic temperatures. We use silicon as the primary material for the cavity and mirrors, due to a zero crossing in its linear coefficient of thermal expansion (CTE) at 123 K and other desirable material properties. We use silica/tantala coatings, which are currently the best choice for making high-finesse, low-noise cavities at room temperature. The material properties of these coatings (which set the thermal noise levels) are relatively unknown at cryogenic temperatures, which motivates us to study them there. We were not able to measure any thermal noise source with our experiment due to excess noise. In this work we analyze the design and performance of the cavities, and recommend a design shift from mid-length cavities to short cavities in order to facilitate a direct measurement of cryogenic coating noise.

In addition, we measure the cavities' (frequency-dependent) photo-thermal response. This can help characterize thermo-optic noise in the coatings, which is poorly understood at cryogenic temperatures. We also explore the feasibility of using the cavity for macroscopic quantum optomechanics, such as ground-state cooling.