26 results for REACH

in CaltechTHESIS


Relevance: 20.00%

Abstract:

Sensory-motor circuits course through the parietal cortex of the human and monkey brain. How parietal cortex manipulates these signals has been an important question in behavioral neuroscience. This thesis presents experiments that explore the contributions of monkey parietal cortex to sensory-motor processing, with an emphasis on the area's role in reaching. First, it is shown that parietal cortex is organized into subregions devoted to specific movements. Area LIP encodes plans to make saccadic eye movements. A nearby area, the parietal reach region (PRR), plans reaches. A series of experiments is then described that explores the contributions of PRR to reach planning. Reach plans are represented in an eye-centered reference frame in PRR. This representation is shown to be stable across eye movements. When a sequence of reaches is planned, only the impending movement is represented in PRR, showing that the area is more related to movement planning than to storing the memory of reach targets. PRR resembles area LIP in each of these properties: the two areas may provide a substrate for hand-eye coordination. These findings yield new perspectives on the functions of the parietal cortex and on the organization of sensory-motor processing in primate brains.

Relevance: 10.00%

Abstract:

Many particles proposed by theories, such as GUT monopoles, nuclearites and 1/5 charge superstring particles, can be categorized as Slow-moving, Ionizing, Massive Particles (SIMPs).

Detailed calculations of the signal-to-noise ratios in various acoustic and mechanical methods for detecting such SIMPs are presented. It is shown that the previous belief that such methods are intrinsically prohibited by thermal noise is incorrect, and that ways to solve the thermal noise problem are already within the reach of today's technology. In fact, many running and finished gravitational wave detection (GWD) experiments are already sensitive to certain SIMPs. As an example, a published GWD result is used to obtain a flux limit for nuclearites.

The result of a search using a scintillator array on Earth's surface is reported. A flux limit of 4.7 × 10^(-12) cm^(-2)sr^(-1)s^(-1) (90% c.l.) is set for any SIMP with 2.7 × 10^(-4) < β < 5 × 10^(-3) and ionization greater than 1/3 of minimum-ionizing muons. Although this limit is above the limits from underground experiments for typical supermassive particles (10^(16) GeV), it is a new limit in certain β and ionization regions for less massive ones (~10^9 GeV) not able to penetrate deep underground, and implies a stringent limit on the fraction of the dark matter that can be composed of massive electrically and/or magnetically charged particles.
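
For orientation, the arithmetic behind a flux limit of this kind can be sketched with the standard Poisson result that zero observed candidates corresponds to a 90% C.L. upper limit of 2.3 events. The acceptance, live time, and efficiency below are placeholder values for illustration, not the parameters of this experiment:

```python
# Hypothetical sketch: 90% C.L. flux limit from a null counting search.
# All detector parameters here are placeholders, NOT the thesis's values.
import math

N_UL_90 = 2.303               # Poisson 90% C.L. upper limit for 0 observed events

area_cm2 = 1.0e6              # sensitive area (placeholder)
solid_angle_sr = 2 * math.pi  # one hemisphere (placeholder)
live_time_s = 3.0e7           # roughly one year of live time (placeholder)
efficiency = 0.9              # detection efficiency (placeholder)

acceptance = area_cm2 * solid_angle_sr * live_time_s * efficiency  # cm^2 sr s
flux_limit = N_UL_90 / acceptance                                  # cm^-2 sr^-1 s^-1
print(f"90% C.L. flux limit: {flux_limit:.2e} cm^-2 sr^-1 s^-1")
```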

The prospects for a future SIMP search with the MACRO detector are discussed. The special problem of a SIMP trigger is examined, and a circuit is proposed that may solve most of the problems of previous designs and may even enable MACRO to detect certain SIMP species with β as low as the orbital velocity around the Earth.

Relevance: 10.00%

Abstract:

The construction and LHC phenomenology of the razor variables M_R, an event-by-event indicator of the heavy particle mass scale, and R, a dimensionless variable related to the transverse momentum imbalance of events and missing transverse energy, are presented. The variables are used in the analysis of the first proton-proton collision dataset at CMS (35 pb^(-1)) in a search for superpartners of the quarks and gluons, targeting indirect hints of dark matter candidates in the context of supersymmetric theoretical frameworks. The analysis produced the highest-sensitivity results for SUSY to date and extended the LHC reach far beyond the previous Tevatron results. A generalized inclusive search is subsequently presented for new heavy particle pairs produced in √s = 7 TeV proton-proton collisions at the LHC using 4.7 ± 0.1 fb^(-1) of integrated luminosity from the second LHC run of 2011. The selected events are analyzed in the 2D razor space of M_R and R, and the analysis is performed in 12 tiers of all-hadronic, single-lepton, and double-lepton final states, in the presence and absence of b-quarks, probing the third-generation sector using the event heavy-flavor content. The search is sensitive to generic supersymmetry models with minimal assumptions about the superpartner decay chains. No excess is observed in the number or shape of event yields relative to Standard Model predictions. Exclusion limits are derived in the CMSSM framework, with gluino masses up to 800 GeV and squark masses up to 1.35 TeV excluded at 95% confidence level, depending on the model parameters. The results are also interpreted for a collection of simplified models, in which gluinos are excluded with masses as large as 1.1 TeV for small neutralino masses, and first- and second-generation squarks, stops, and sbottoms are excluded for masses up to about 800, 425, and 400 GeV, respectively.
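
The razor variables have compact closed forms; the sketch below follows the standard run-1 construction (M_R from the two megajet four-momenta, M_T^R from the megajets plus missing transverse energy, R = M_T^R/M_R), with an invented event for illustration:

```python
# Sketch of the razor variables for a two-megajet event.
# Definitions follow the standard razor construction; the event is invented.
import math

def razor(j1, j2, met):
    """j1, j2: megajet four-vectors (E, px, py, pz); met: (mex, mey)."""
    E1, px1, py1, pz1 = j1
    E2, px2, py2, pz2 = j2
    mex, mey = met
    # M_R: longitudinally invariant estimator of the heavy-particle mass scale
    MR = math.sqrt((E1 + E2) ** 2 - (pz1 + pz2) ** 2)
    # M_T^R: transverse mass built from the megajets and the missing momentum
    met_mag = math.hypot(mex, mey)
    pt_sum = math.hypot(px1, py1) + math.hypot(px2, py2)
    met_dot = mex * (px1 + px2) + mey * (py1 + py2)
    MTR = math.sqrt(0.5 * (met_mag * pt_sum - met_dot))
    # R peaks near zero for QCD multijets; signal with real MET populates large R
    return MR, MTR, MTR / MR

# invented event: two ~300 GeV megajets plus 150 GeV of missing energy
j1 = (320.0, 300.0, 0.0, 80.0)
j2 = (330.0, -280.0, 60.0, -120.0)
print(razor(j1, j2, (150.0, 0.0)))
```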

With the discovery of a new boson by the CMS and ATLAS experiments in the γγ and 4-lepton final states, the identity of the putative Higgs candidate must be established through measurements of its properties. The spin and quantum numbers are of particular importance, and we describe a method, developed before the discovery, for measuring the J^(PC) of this particle using the observed signal events in the H → ZZ* → 4-lepton channel. Adaptations of the razor kinematic variables are introduced for the H → WW* → 2-lepton/2-neutrino channel, improving the resonance mass resolution and increasing the discovery significance. The prospects for incorporating this channel in an examination of the new boson's J^(PC) are discussed, with indications that it could provide complementary information to the H → ZZ* → 4-lepton final state, particularly for measuring CP violation in these decays.

Relevance: 10.00%

Abstract:

The cystic fibrosis transmembrane conductance regulator (CFTR) is a chloride channel member of the ATP-binding cassette (ABC) superfamily of membrane proteins. CFTR has two homologous halves, each consisting of six transmembrane spanning domains (TM) followed by a nucleotide binding fold, connected by a regulatory (R) domain. This thesis addresses the question of which domains are responsible for Cl^- selectivity, i.e., which domains line the channel pore.

To address this question, novel blockers of CFTR were characterized. CFTR was heterologously expressed in Xenopus oocytes to study the mechanism of block by two closely related arylaminobenzoates, diphenylamine-2-carboxylic acid (DPC) and flufenamic acid (FFA). Block by both is voltage-dependent, with a binding site ≈ 40% through the electric field of the membrane. DPC and FFA can both reach their binding site from either side of the membrane to produce a flickering block of CFTR single channels. In addition, DPC block is influenced by Cl^- concentration, and DPC blocks with a bimolecular forward binding rate and a unimolecular dissociation rate. Therefore, DPC and FFA are open-channel blockers of CFTR, and a residue of CFTR whose mutation affects their binding must line the pore.
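
Voltage-dependent block of this kind is conventionally interpreted with the Woodhull model, in which a charged blocker senses a fraction δ of the transmembrane field. The sketch below is that generic model, not the thesis's fit: the electrical distance δ ≈ 0.4 echoes the text, but the K_d, charge sign, and concentrations are illustrative placeholders.

```python
# Generic Woodhull-style model of voltage-dependent open-channel block.
# Parameters are illustrative, not fitted values from the thesis.
import math

F_OVER_RT = 1.0 / 25.4   # 1/mV near room temperature (RT/F ~ 25 mV)

def kd(v_mV, kd0_uM=200.0, z=-1, delta=0.4):
    """Dissociation constant for a blocker of charge z sensing a fraction
    delta of the membrane potential (Woodhull formalism)."""
    return kd0_uM * math.exp(z * delta * v_mV * F_OVER_RT)

def fraction_blocked(blocker_uM, v_mV):
    """Simple 1:1 binding; sign conventions depend on the side of application."""
    return blocker_uM / (blocker_uM + kd(v_mV))

for v in (-100, -50, 0, 50):
    print(f"{v:+4d} mV: blocked fraction {fraction_blocked(200.0, v):.2f}")
```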

Screening of site-directed mutants for altered DPC binding affinity reveals that TM-6 and TM-12 line the pore. Mutation of residue S341 in TM-6 abolishes most DPC block, greatly reduces single-channel conductance, and alters the direction of current rectification. Additional residues are found in TM-6 (K335) and TM-12 (T1134) whose mutations weaken or strengthen DPC block; other mutations move the DPC binding site from TM-6 to TM-12. The strengthened block and lower conductance due to mutation T1134F are quantitated at the single-channel level. The geometry of DPC and of the mutated residues suggests α-helical structures for TM-6 and TM-12. Evidence is presented that the effects of the mutations are due to direct side-chain interaction, and not to allosteric effects propagated through the protein. Mutations are also made in TM-11, including mutation S1118F, which gives voltage-dependent current relaxations. The results may guide future studies on permeation through ABC transporters and through other Cl^- channels.

Relevance: 10.00%

Abstract:

The interaction of SO_2 with γ-Al_2O_3 and the deposition of H_2-permselective SiO_2 films have been investigated. The adsorption and oxidative adsorption of SO_2 on γ-Al_2O_3 have been examined at temperatures of 500-700°C by Fourier transform infrared spectroscopy (FTIR) and thermogravimetric analysis (TGA). At temperatures above 500°C, most of the SO_2 adsorbed on the strong sites on alumina. The adsorbed SO_2 species was characterized by an IR band at 1065 cm^(-1). The equilibrium coverage and initial rate of adsorption decreased with temperature, suggesting a two-step adsorption. When γ-Al_2O_3 was contacted with a mixture of SO_2 and O_2, adsorption of SO_2 and oxidation of the adsorbed SO_2 to a surface sulfate, characterized by broad IR bands at 1070 cm^(-1) and 1390 cm^(-1), took place. The results of a series of TGA experiments under different atmospheres strongly suggest that surface SO_2 and surface sulfate involve the same active sites, such that SO_2 adsorption is inhibited by already-formed sulfate. The results also indicate a broad range of site strengths.

The desorption of adsorbed SO_2 and the reductive desorption of oxidatively adsorbed SO_2 have been investigated by microreactor experiments and thermogravimetric analysis (TGA). Temperature-programmed reduction (TPR) of adsorbed SO_2 showed that SO_2 was desorbed without significant reaction with H_2 when the H_2 concentration was low, while considerable reaction occurred when 100% H_2 was used. SO_2 adsorbed on the strong sites on alumina was reduced to sulfur and H_2S. Isothermal reduction experiments on oxidatively adsorbed SO_2 reveal that the rate of reduction is very slow below 550°C, even with 100% H_2, and that the reduction product is mainly SO_2. TPR experiments on oxidatively adsorbed SO_2 showed that H_2S arose from a sulfate strongly chemisorbed on the surface.

Films of amorphous SiO_2 were deposited within the walls of porous Vycor tubes by SiH_4 oxidation in an opposing-reactants geometry: SiH_4 was passed inside the tube while O_2 was passed outside the tube. The two reactants diffused opposite to each other and reacted within a narrow front inside the tube wall to form a thin SiO_2 film. Once the pores were plugged, the reactants could not reach each other and the reaction stopped. At 450°C and 0.1 and 0.33 atm of SiH_4 and O_2, the reaction was complete within 15 minutes. The thickness of the SiO_2 film was estimated to be about 0.1 µm. Measurements of H_2 and N_2 permeation rates showed that the SiO_2 film was highly selective to H_2 permeation. The H_2:N_2 flux ratio at 450°C varied between 2000 and 3000.
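
One way to see that such a selectivity implies transport through dense SiO_2 rather than through residual open pores: Knudsen diffusion through pores can give a H_2:N_2 selectivity of at most √(M_N2/M_H2) ≈ 3.7, three orders of magnitude below the measured ratio. A minimal comparison:

```python
# Compare the measured H2:N2 flux ratio to the Knudsen (open-pore) ceiling.
# Knudsen selectivity depends only on the inverse square root of molar mass.
import math

M_H2, M_N2 = 2.016, 28.014                 # g/mol
knudsen = math.sqrt(M_N2 / M_H2)
print(f"Knudsen H2:N2 selectivity ~ {knudsen:.1f}")   # ~3.7

measured_low, measured_high = 2000, 3000   # reported flux ratio range at 450 C
print(f"measured exceeds Knudsen by {measured_low / knudsen:.0f}x "
      f"to {measured_high / knudsen:.0f}x")
```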

Thin SiO_2 films were heat treated in different gas mixtures to determine their stability in functioning as high-temperature hydrogen-permselective membranes. The films were heat-treated at 450-700°C in dry N_2, dry O_2, N_2-H_2O, and O_2-H_2O mixtures. The permeation rates of H_2 and N_2 changed depending on the original conditions of film formation as well as on the heat treatment. Heating in dry N_2 slowly reduced the permeation rates of both H_2 and N_2. Heating in a N_2-H_2O atmosphere led to a steeper decline of H_2 permeability. But the permeation rate of N_2 increased or decreased according to whether the film deposition had been carried out in the absence or presence of H_2O vapor, respectively. Thermal treatment in O_2 caused rapid decline of the permeation rates of H_2 and N_2 in films that were deposited under dry conditions. The decline was moderate in films deposited under wet conditions.

Relevance: 10.00%

Abstract:

Secondary-ion mass spectrometry (SIMS), electron probe microanalysis (EPMA), analytical scanning electron microscopy (SEM), and infrared (IR) spectroscopy were used to determine the chemical composition and mineralogy of sub-micrometer inclusions in cubic diamonds and in overgrowths (coats) on octahedral diamonds from Zaire, Botswana, and some unknown localities.

The inclusions are sub-micrometer in size. The typical diameter encountered during transmission electron microscope (TEM) examination was 0.1-0.5 µm. The micro-inclusions are sub-rounded and their shape is crystallographically controlled by the diamond. Normally they are not associated with cracks or dislocations and appear to be well isolated within the diamond matrix. The number density of inclusions is highly variable on any scale and may reach 10^(11) inclusions/cm^3 in the most densely populated zones. The total concentration of metal oxides in the diamonds varies between 20 and 1270 ppm (by weight).

SIMS analysis yields the average composition of about 100 inclusions contained in the sputtered volume. Comparison of analyses of different volumes of an individual diamond shows roughly uniform composition (typically ±10% relative). The variation among the average compositions of different diamonds is somewhat greater (typically ±30%). Nevertheless, all diamonds exhibit similar characteristics, being rich in water, carbonate, SiO_2, and K_2O, and depleted in MgO. The composition of micro-inclusions in most diamonds varies within the following ranges: SiO_2, 30-53%; K_2O, 12-30%; CaO, 8-19%; FeO, 6-11%; Al_2O_3, 3-6%; MgO, 2-6%; TiO_2, 2-4%; Na_2O, 1-5%; P_2O_5, 1-4%; and Cl, 1-3%. In addition, BaO, 1-4%; SrO, 0.7-1.5%; La_2O_3, 0.1-0.3%; Ce_2O_3, 0.3-0.5%; smaller amounts of other rare-earth elements (REE); and Mn, Th, and U were detected by instrumental neutron activation analysis (INAA). Mg/(Fe+Mg) ratios of 0.40-0.62 are low compared with other mantle-derived phases; K/Al ratios of 2-7 are very high; and chondrite-normalized Ce/Eu ratios of 10-21 are also high, indicating extremely fractionated REE patterns.

SEM analyses indicate that individual inclusions within a single diamond are roughly of similar composition. The average composition of individual inclusions as measured with the SEM is similar to that measured by SIMS. Compositional variations revealed by the SEM are larger than those detected by SIMS and indicate a small variability in the composition of individual inclusions. No compositions of individual inclusions were determined that might correspond to mono-mineralic inclusions.

IR spectra of inclusion-bearing zones exhibit characteristic absorption due to: (1) pure diamond; (2) nitrogen and hydrogen in the diamond matrix; and (3) mineral phases in the micro-inclusions. Nitrogen concentrations of 500-1100 ppm, typical of the micro-inclusion-bearing zones, are higher than the average nitrogen content of diamonds. Only type IaA centers were detected by IR. A yellow coloration may indicate a small concentration of type Ib centers.

The absorption due to the micro-inclusions in all diamonds produces similar spectra and indicates the presence of hydrated sheet silicates (most likely, Fe-rich clay minerals), carbonates (most likely calcite), and apatite. Small quantities of molecular CO_2 are also present in most diamonds. Water is probably associated with the silicates but the possibility of its presence as a fluid phase cannot be excluded. Characteristic lines of olivine, pyroxene and garnet were not detected and these phases cannot be significant components of the inclusions. Preliminary quantification of the IR data suggests that water and carbonate account for, on average, 20-40 wt% of the micro-inclusions.

The composition and mineralogy of the micro-inclusions are completely different from those of the more common, larger inclusions of the peridotitic or eclogitic assemblages. Their bulk composition resembles that of potassic magmas, such as kimberlites and lamproites, but is enriched in H_2O, CO_3, K_2O, and incompatible elements, and depleted in MgO.

It is suggested that the composition of the micro-inclusions represents a volatile-rich fluid or a melt trapped by the diamond during its growth. The high content of K, Na, P, and incompatible elements suggests that the trapped material found in the micro-inclusions may represent an effective metasomatizing agent. It may also be possible that fluids of similar composition are responsible for the extreme enrichment of incompatible elements documented in garnet and pyroxene inclusions in diamonds.

The origin of the fluid trapped in the micro-inclusions is still uncertain. It may have been formed by incipient melting of highly metasomatized mantle rocks. More likely, it is the result of fractional crystallization of a potassic parental magma at depth. In either case, the micro-inclusions document the presence of highly potassic fluids or melts at depths corresponding to the diamond stability field in the upper mantle. The phases presently identified in the inclusions are believed to be the result of closed-system reactions at lower pressures.

Relevance: 10.00%

Abstract:

For a hungry fruit fly, locating and landing on a fermenting fruit where it can feed, find mates, and lay eggs is an essential and difficult task requiring the integration of both olfactory and visual cues. Understanding how flies accomplish this will help provide a comprehensive ethological context for the expanding knowledge of the neural circuits involved in processing olfaction and vision, as well as inspire novel engineering solutions for control and estimation in computationally limited robotic applications. In this thesis, I use novel high-throughput methods to develop a detailed overview of how flies track odor plumes, land, and regulate flight speed, and I provide an example of how these insights can be applied to robotics to simplify complicated estimation problems.

To localize an odor source, flies exhibit three iterative, reflex-driven behaviors. Upon encountering an attractive plume, flies increase their flight speed and turn upwind using visual cues. After losing the plume, flies begin zigzagging crosswind, again using visual cues to control their heading. After sensing an attractive odor, flies become more attracted to small visual features, which increases their chances of finding the plume source. Their changes in heading are largely controlled by open-loop maneuvers called saccades, which they direct towards and away from visual features.

If a fly decides to land on an object, it begins to decelerate so as to maintain a stereotypical ratio of expansion to retinal size. Once they reach a stereotypical distance from the target, flies extend their legs in preparation for touchdown. Although it is unclear what cues flies use to trigger leg extension, previous studies have indicated that it is likely under visual control. In Chapter 3, I use a nonlinear control-theoretic analysis and a robotic testbed to propose a novel, putative mechanism for how a fly might visually estimate distance by actively decelerating according to a visual control law.

Throughout these behaviors, a common theme is the visual control of flight speed. Using genetic tools, I show that the neuromodulator octopamine plays an important role in regulating flight speed, and propose a neural circuit for how this controller might be implemented in the fly's brain. Two general biological and engineering principles are evident across my experiments: (1) complex behaviors, such as foraging, can emerge from the interactions of simple, independent sensory-motor modules; (2) flies control their behavior in a way that simplifies complex estimation problems.
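
A toy simulation of one way such an expansion-based law could work (my own illustration, not the specific controller derived in Chapter 3): for a target of retinal size θ at distance d, the relative expansion rate θ̇/θ equals v/d, which the eye can measure without knowing range. Regulating v/d to a constant set point makes distance decay exponentially and ties the commanded deceleration to distance; all parameter values below are made up.

```python
# Toy expansion-regulation landing law (illustrative only):
# hold the visually measurable relative expansion rate v/d at a set point c.
c = 2.0            # desired relative expansion rate, 1/s (made-up value)
gain = 10.0        # proportional gain (made-up value)
dt = 1e-3          # integration step, s
d, v = 1.0, 3.0    # initial distance (m) and approach speed (m/s)

t = 0.0
while d > 0.05:                     # "leg extension" trigger distance (made up)
    expansion = v / d               # what the fly's eye can measure directly
    a = -gain * (expansion - c)     # decelerate when expansion exceeds set point
    v += a * dt
    d -= v * dt
    t += dt
print(f"reached 5 cm at t = {t:.2f} s, residual speed {v:.2f} m/s")
```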

Relevance: 10.00%

Abstract:

In the quest to develop viable designs for third-generation optical interferometric gravitational-wave detectors, one strategy is to monitor the relative momentum or speed of the test-mass mirrors, rather than monitoring their relative position. The most straightforward design for a speed-meter interferometer that accomplishes this is described and analyzed in Chapter 2. This design (due to Braginsky, Gorodetsky, Khalili, and Thorne) is analogous to a microwave-cavity speed meter conceived by Braginsky and Khalili. A mathematical mapping between the microwave speed meter and the optical interferometric speed meter is developed and used to show (in accord with the speed being a quantum nondemolition observable) that in principle the interferometric speed meter can beat the gravitational-wave standard quantum limit (SQL) by an arbitrarily large amount, over an arbitrarily wide range of frequencies. However, in practice, to reach or beat the SQL, this specific speed meter requires exorbitantly high input light power. The physical reason for this is explored, along with other issues such as constraints on performance due to optical dissipation.

Chapter 3 proposes a more sophisticated version of a speed meter. This new design requires only a modest input power and appears to be a fully practical candidate for third-generation LIGO. It can beat the SQL (the approximate sensitivity of second-generation LIGO interferometers) over a broad range of frequencies (~10 to 100 Hz in practice) by a factor h/h_SQL ~ √(W^(SQL)_(circ)/W_circ). Here W_circ is the light power circulating in the interferometer arms and W^(SQL)_(circ) ≃ 800 kW is the circulating power required to beat the SQL at 100 Hz (the LIGO-II power). If squeezed vacuum (with a power-squeeze factor e^(-2R)) is injected into the interferometer's output port, the SQL can be beat with a much-reduced laser power: h/h_SQL ~ √(W^(SQL)_(circ) e^(-2R)/W_circ). For realistic parameters (e^(-2R) ≃ 0.1 and W_circ ≃ 800 to 2000 kW), the SQL can be beat by a factor of ~3 to 4 from 10 to 100 Hz. [However, as the power increases in these expressions, the speed meter becomes more narrowband; additional power and re-optimization of some parameters are required to maintain the wide band.] By performing frequency-dependent homodyne detection on the output (with the aid of two kilometer-scale filter cavities), one can markedly improve the interferometer's sensitivity at frequencies above 100 Hz.
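
To make the scaling concrete, here is the arithmetic behind the quoted improvement factor, evaluated at the parameter values stated in the text (a sketch of the formula only, nothing beyond it):

```python
# Evaluate h/h_SQL ~ sqrt(W_SQL * e^{-2R} / W_circ) for the quoted parameters.
import math

W_SQL = 800e3                        # circulating power to reach the SQL at 100 Hz, W
for W_circ in (800e3, 2000e3):       # circulating powers quoted in the text, W
    for squeeze in (1.0, 0.1):       # no squeezing vs. power-squeeze factor e^-2R
        factor = math.sqrt(W_SQL * squeeze / W_circ)
        print(f"W_circ = {W_circ/1e3:4.0f} kW, e^-2R = {squeeze}: "
              f"h/h_SQL ~ {factor:.2f}")
```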

Chapters 2 and 3 are part of an ongoing effort to develop a practical variant of an interferometric speed meter and to combine the speed-meter concept with other ideas to yield a promising third-generation interferometric gravitational-wave detector that entails low laser power.

Chapter 4 is a contribution to the foundations for analyzing sources of gravitational waves for LIGO. Specifically, it presents an analysis of the tidal work done on a self-gravitating body (e.g., a neutron star or black hole) in an external tidal field (e.g., that of a binary companion). The change in the mass-energy of the body as a result of the tidal work, or "tidal heating," is analyzed using the Landau-Lifshitz pseudotensor and the local asymptotic rest frame of the body. It is shown that the work done on the body is gauge invariant, while the body-tidal-field interaction energy contained within the body's local asymptotic rest frame is gauge dependent. This is analogous to Newtonian theory, where the interaction energy is shown to depend on how one localizes gravitational energy, but the work done on the body is independent of that localization. These conclusions play a role in analyses, by others, of the dynamics and stability of the inspiraling neutron-star binaries whose gravitational waves are likely to be seen and studied by LIGO.

Relevance: 10.00%

Abstract:

Experimental work was performed to delineate the system of digested sludge particles and associated trace metals and also to measure the interactions of sludge with seawater. Particle-size and particle number distributions were measured with a Coulter Counter. Number counts in excess of 10^(12) particles per liter were found in both the City of Los Angeles Hyperion mesophilic digested sludge and the Los Angeles County Sanitation Districts (LACSD) digested primary sludge. More than 90 percent of the particles had diameters less than 10 microns.

Total and dissolved trace metals (Ag, Cd, Cr, Cu, Fe, Mn, Ni, Pb, and Zn) were measured in LACSD sludge. Manganese was the only metal whose dissolved fraction exceeded one percent of the total metal. Sedimentation experiments for several dilutions of LACSD sludge in seawater showed that the sedimentation velocities of the sludge particles decreased as the dilution factor increased. A tenfold increase in dilution shifted the sedimentation velocity distribution by an order of magnitude. Chromium, Cu, Fe, Ni, Pb, and Zn were also followed during sedimentation. To a first approximation these metals behaved like the particles.
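
For scale, the settling speed of particles in this size range can be estimated with Stokes' law, v_s = g d^2 (ρ_p − ρ_f)/(18 μ); the particle density below is an assumed value for a near-neutrally buoyant floc, not a measured one:

```python
# Stokes settling velocity for a small particle in seawater (illustrative values).
g = 9.81         # m/s^2
mu = 1.1e-3      # dynamic viscosity of seawater, Pa*s (approximate)
rho_f = 1025.0   # seawater density, kg/m^3
rho_p = 1100.0   # particle density, kg/m^3 (assumed, near-neutrally buoyant floc)
d = 5e-6         # particle diameter, m (within the <10 micron range reported)

v_s = g * d**2 * (rho_p - rho_f) / (18 * mu)
print(f"settling velocity ~ {v_s:.2e} m/s (~{v_s * 86400 * 100:.1f} cm/day)")
```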

Solids and selected trace metals (Cr, Cu, Fe, Ni, Pb, and Zn) were monitored in oxic mixtures of both Hyperion and LACSD sludges for periods of 10 to 28 days. Less than 10 percent of the filterable solids dissolved or were oxidized. Only Ni was mobilized away from the particles. The majority of the mobilization was complete in less than one day.

The experimental data of this work were combined with oceanographic, biological, and geochemical information to propose and model the discharge of digested sludge to the San Pedro and Santa Monica Basins. A hydraulic computer simulation for a round buoyant jet in a density stratified medium showed that discharges of sludge effluent mixture at depths of 730 m would rise no more than 120 m. Initial jet mixing provided dilution estimates of 450 to 2600. Sedimentation analyses indicated that the solids would reach the sediments within 10 km of the point discharge.

Mass balances on the oxidizable chemical constituents in sludge indicated that the nearly anoxic waters of the basins would become wholly anoxic as a result of proposed discharges. From chemical-equilibrium computer modeling of the sludge digester and dilutions of sludge in anoxic seawater, it was predicted that the chemistry of all trace metals except Cr and Mn will be controlled by the precipitation of metal sulfide solids. This metal speciation held for dilutions up to 3000.

The net environmental impacts of this scheme should be salutary. The trace metals in the sludge should be immobilized in the anaerobic bottom sediments of the basins. Apparently no lifeforms higher than bacteria are there to be disrupted. The proposed deep-water discharges would remove the need for potentially expensive and energy-intensive land disposal alternatives and would end the discharge to the highly productive water near the ocean surface.

Relevance: 10.00%

Abstract:

Spontaneous emission into the lasing mode fundamentally limits laser linewidths. Reducing cavity losses provides two benefits to linewidth: (1) fewer excited carriers are needed to reach threshold, resulting in less phase-corrupting spontaneous emission into the laser mode, and (2) more photons are stored in the laser cavity, such that each individual spontaneous emission event disturbs the phase of the field less. Strong optical absorption in III-V materials causes high losses, preventing currently available semiconductor lasers from achieving ultra-narrow linewidths. This absorption is a natural consequence of the compromise between efficient electrical and efficient optical performance in a semiconductor laser. Some of the III-V layers must be heavily doped in order to funnel excited carriers into the active region, which has the side effect of making the material strongly absorbing.

This thesis presents a new technique, called modal engineering, to remove modal energy from the lossy region and store it in an adjacent low-loss material, thereby reducing overall optical absorption. A quantum mechanical analysis of modal engineering shows that modal gain and spontaneous emission rate into the laser mode are both proportional to the normalized intensity of that mode at the active region. If optical absorption near the active region dominates the total losses of the laser cavity, shifting modal energy from the lossy region to the low-loss region will reduce modal gain, total loss, and the spontaneous emission rate into the mode by the same factor, so that linewidth decreases while the threshold inversion remains constant. The total spontaneous emission rate into all other modes is unchanged.
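
The scaling in this argument follows the standard Henry form of the laser linewidth, Δν = R_sp (1 + α²)/(4π N_p), with R_sp the spontaneous emission rate into the mode and N_p the intracavity photon number. The sketch below simply exercises that proportionality with invented numbers; the compounding of the two benefits is an idealization, not a measured result from the thesis:

```python
# Linewidth scaling sketch: Henry's form dnu = R_sp * (1 + alpha^2) / (4*pi*N_p).
# All values are invented; only the scaling with loss matters here.
import math

def linewidth(R_sp, N_p, alpha=3.0):
    """Laser linewidth (Hz) from spontaneous rate into the mode and photon
    number, including the linewidth-enhancement factor alpha."""
    return R_sp * (1 + alpha**2) / (4 * math.pi * N_p)

base = linewidth(R_sp=1e8, N_p=1e5)
# In an idealized picture, shifting modal energy out of the lossy III-V region
# by a factor q lowers R_sp and raises stored photons at fixed pump, so the
# two benefits named in the text compound.
q = 5.0
engineered = linewidth(R_sp=1e8 / q, N_p=1e5 * q)
print(f"linewidth reduction: {base / engineered:.0f}x")   # q^2 in this idealization
```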

Modal engineering is demonstrated using the Si/III-V platform, in which light is generated in the III-V material and stored in the low-loss silicon material. The silicon is patterned as a high-Q resonator to minimize all sources of loss. Fabricated lasers employing modal engineering to concentrate light in silicon demonstrate linewidths at least 5 times smaller than lasers without modal engineering at the same pump level above threshold, while maintaining the same thresholds.

Relevance: 10.00%

Abstract:

In the quest for a descriptive theory of decision-making, the rational-actor model in economics imposes rather unrealistic expectations and abilities on human decision makers. The further we move from idealized scenarios, such as perfectly competitive markets, and ambitiously extend the reach of the theory to describe everyday decision-making situations, the less sense these assumptions make. Behavioural economics has instead proposed models based on assumptions that are more psychologically realistic, with the aim of gaining more precision and descriptive power. Increased psychological realism, however, comes at the cost of a greater number of parameters and model complexity. There is now a plethora of models, based on different assumptions and applicable in differing contextual settings, and selecting the right model to use tends to be an ad hoc process. In this thesis, we develop optimal experimental design methods and evaluate different behavioural theories against evidence from lab and field experiments.

We look at evidence from controlled laboratory experiments. Subjects are presented with choices between monetary gambles or lotteries. Different decision-making theories evaluate the choices differently and would make distinct predictions about the subjects' choices. Theories whose predictions are inconsistent with the actual choices can be systematically eliminated. Behavioural theories can have multiple parameters, requiring complex experimental designs with a very large number of possible choice tests. This imposes computational and economic constraints on using classical experimental design methods. We develop a methodology of adaptive tests, Bayesian Rapid Optimal Adaptive Designs (BROAD), that sequentially chooses the "most informative" test at each stage and, based on the response, updates its posterior beliefs over the theories, which informs the next most informative test to run. BROAD utilizes the Equivalence Class Edge Cutting (EC2) criterion to select tests. We prove that the EC2 criterion is adaptively submodular, which allows us to prove theoretical guarantees against the Bayes-optimal testing sequence even in the presence of noisy responses. In simulated ground-truth experiments, we find that the EC2 criterion recovers the true hypotheses with significantly fewer tests than more widely used criteria such as Information Gain and Generalized Binary Search. We show, theoretically as well as experimentally, that, surprisingly, these popular criteria can perform poorly in the presence of noise or subject errors. Furthermore, we use the adaptive submodularity of EC2 to implement an accelerated greedy version of BROAD, which leads to orders-of-magnitude speedups over other methods.
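
A minimal toy rendering of the EC2 scoring step (my own sketch, not the BROAD implementation): hypotheses carry posterior weights and are partitioned into equivalence classes (the competing theories); "edges" connect hypotheses in different classes, and each candidate test is scored by the expected edge weight its outcome would cut.

```python
# Toy EC2-style greedy test selection (illustrative; not the BROAD code).
# hypotheses: name -> (theory_class, posterior_weight)
# tests: test_name -> {hypothesis_name: predicted_outcome}
from itertools import combinations

def edge_weight(hyps):
    """Total weight of edges between hypotheses in *different* classes."""
    return sum(w1 * w2 for (c1, w1), (c2, w2)
               in combinations(hyps.values(), 2) if c1 != c2)

def expected_cut(hyps, predictions):
    """Expected edge weight removed by observing one (noiseless) test outcome."""
    total_mass = sum(w for _, w in hyps.values())
    expected_remaining = 0.0
    for outcome in set(predictions.values()):
        consistent = {h: cw for h, cw in hyps.items()
                      if predictions[h] == outcome}
        p_outcome = sum(w for _, w in consistent.values()) / total_mass
        expected_remaining += p_outcome * edge_weight(consistent)
    return edge_weight(hyps) - expected_remaining

def choose_test(hyps, tests):
    """Greedy EC2 step: pick the test with the largest expected cut."""
    return max(tests, key=lambda t: expected_cut(hyps, tests[t]))

# two theories (classes) with two parameterizations each; test "B" separates
# the classes cleanly, so EC2 prefers it over test "A"
hyps = {"h1": ("EV", 0.25), "h2": ("EV", 0.25),
        "h3": ("PT", 0.25), "h4": ("PT", 0.25)}
tests = {"A": {"h1": 0, "h2": 1, "h3": 0, "h4": 1},
         "B": {"h1": 0, "h2": 0, "h3": 1, "h4": 1}}
print(choose_test(hyps, tests))   # -> "B"
```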

We use BROAD to perform two experiments. First, we compare the main classes of theories for decision-making under risk, namely expected value, prospect theory, constant relative risk aversion (CRRA), and moments models. Subjects are given an initial endowment and sequentially presented with choices between two lotteries, with the possibility of losses. The lotteries are selected using BROAD, and 57 subjects from Caltech and UCLA are incentivized by randomly realizing one of the lotteries chosen. Aggregate posterior probabilities over the theories show limited evidence in favour of CRRA and moments models. Classifying the subjects into types showed that most subjects are described by prospect theory, followed by expected value. Adaptive experimental design raises the possibility that subjects could engage in strategic manipulation, i.e., subjects could mask their true preferences and choose differently in order to obtain more favourable tests in later rounds, thereby increasing their payoffs. We pay close attention to this problem; strategic manipulation is ruled out because it is infeasible in practice, and because we do not find any signatures of it in our data.

In the second experiment, we compare the main theories of time preference: exponential discounting, hyperbolic discounting, "present bias" models (quasi-hyperbolic (α, β) discounting and fixed-cost discounting), and generalized-hyperbolic discounting. 40 subjects from UCLA were given choices between two options: a smaller but more immediate payoff versus a larger but later payoff. We found very limited evidence for present-bias models and hyperbolic discounting, and most subjects were classified as generalized-hyperbolic discounting types, followed by exponential discounting.

In these models the passage of time is linear. We instead consider a psychological model where the perception of time is subjective. We prove that when the biological (subjective) time is positively dependent, it gives rise to hyperbolic discounting and temporal choice inconsistency.

We also test the predictions of behavioural theories in the "wild". We focus on prospect theory, which emerged as the dominant theory in our lab experiments on risky choice. Loss aversion and reference dependence predict that consumers will behave in ways distinct from what the standard rational model predicts. Specifically, loss aversion predicts that when an item is offered at a discount, the demand for it will be greater than its price elasticity explains. Even more importantly, when the item is no longer discounted, demand for its close substitute will increase excessively. We tested this prediction using a discrete choice model with a loss-averse utility function on data from a large eCommerce retailer. Not only did we identify loss aversion, but we also found that the effect decreased with consumers' experience. We outline the policy implications that consumer loss aversion entails, and strategies for competitive pricing.

In future work, BROAD can be widely applied to testing different behavioural models, e.g., in social preference and game theory, and in different contextual settings. Additional measurements beyond choice data, including biological measurements such as skin conductance, can be used to more rapidly eliminate hypotheses and speed up model comparison. Discrete choice models also provide a framework for testing behavioural models with field data, and encourage combined lab-field experiments.

Relevance: 10.00%

Abstract:

Most space applications require deployable structures due to the limited size of current launch vehicles. Specifically, payloads in nanosatellites such as CubeSats require very high compaction ratios due to the very limited space available in this type of platform. Strain-energy-storing deployable structures can be suitable for these applications, but the curvature to which such structures can be folded is limited to the elastic range. Thanks to fiber microbuckling, high-strain composite materials can be folded to much higher curvatures without showing significant damage, which makes them suitable for deployable structures requiring very high compaction. However, in applications that require carrying loads in compression, fiber microbuckling also dominates the strength of the material. A good understanding of the strength in compression of high-strain composites is therefore needed to determine how suitable they are for this type of application.

The goal of this thesis is to investigate, experimentally and numerically, the microbuckling in compression of high-strain composites. In particular, the behavior in compression of unidirectional carbon-fiber-reinforced silicone rods (CFRS) is studied. Experimental testing of the compression failure of CFRS rods showed a higher strength in compression than analytical models estimate, which is unusual in standard polymer composites. This effect, first discovered in the present research, was attributed to the variation of the random carbon-fiber angles with respect to the nominal direction. This is an important effect, as it implies that microbuckling strength might be increased by controlling the fiber angles. With a higher microbuckling strength, high-strain materials could carry loads in compression without reaching microbuckling, and would therefore be suitable for several space applications.
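
For context, the classical analytical estimate that such measurements are compared against is Rosen's elastic shear-mode microbuckling result, σ_cr = G_m/(1 − V_f), roughly the composite's longitudinal shear modulus. The numbers below are illustrative for a soft silicone matrix, not the thesis's measured CFRS values:

```python
# Rosen's shear-mode microbuckling estimate for a soft-matrix composite.
# sigma_cr = G_m / (1 - V_f); values are assumed, not measured CFRS data.
G_m = 0.3e6   # silicone matrix shear modulus, Pa (assumed ~0.3 MPa)
V_f = 0.3     # fiber volume fraction (assumed)

sigma_cr = G_m / (1 - V_f)
print(f"critical microbuckling stress ~ {sigma_cr / 1e3:.0f} kPa")
```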

A finite element model was developed to predict the homogenized stiffness of the CFRS, and the homogenization results were used in another finite element model that simulated a homogenized rod under axial compression. A statistical representation of the fiber angles was implemented in the model. The presence of fiber angles increased the longitudinal shear stiffness of the material, resulting in a higher strength in compression. The simulations showed a large increase of the strength in compression for lower values of the standard deviation of the fiber angle, and a slight decrease of strength in compression for lower values of the mean fiber angle. The strength observed in the experiments was achieved with the minimum local angle standard deviation observed in the CFRS rods, whereas the shear stiffness measured in torsion tests was achieved with the overall fiber angle distribution observed in the CFRS rods.

High-strain composites exhibit good bending capabilities, but they tend to be soft out-of-plane. To achieve a higher out-of-plane stiffness, the concept of dual-matrix composites is introduced. Dual-matrix composites are foldable composites that are soft in the crease regions and stiff elsewhere. Previous attempts to fabricate continuous dual-matrix fiber-composite shells had limited performance due to excessive resin flow and matrix mixing. An alternative method, presented in this thesis, uses UV-cure silicone and fiberglass to avoid these problems. Preliminary experiments on the effect of folding on the out-of-plane stiffness are presented. An application to a conical log-periodic antenna for CubeSats is proposed, using origami-inspired stowing schemes that allow a conical dual-matrix composite shell to reach very high compaction ratios.

Relevance: 10.00%

Abstract:

In this work we chiefly deal with two broad classes of problems in computational materials science: determining the doping mechanism in a semiconductor, and developing an extreme-condition equation of state. While solving certain aspects of these questions is well-trodden ground, both require extending the reach of existing methods to fully answer them. Here we choose to build upon the framework of density functional theory (DFT), which provides an efficient means to investigate a system from a quantum-mechanical description.

Zinc phosphide (Zn_3P_2) could be the basis for cheap and highly efficient solar cells. Its use in this regard is limited by the difficulty of n-type doping the material. In an effort to understand the mechanism behind this, the energetics and electronic structure of intrinsic point defects in zinc phosphide are studied using generalized Kohn-Sham theory with the Heyd-Scuseria-Ernzerhof (HSE) hybrid functional for exchange and correlation. A novel 'perturbation extrapolation' scheme is utilized to extend the computationally expensive HSE functional to this large-scale defect system. According to the calculations, the formation energy of charged phosphorus interstitial defects is very low in n-type Zn_3P_2; these defects act as 'electron sinks', nullifying the desired doping and pulling the Fermi level back towards the p-type regime. Going forward, this insight provides clues for fabricating useful zinc phosphide based devices. In addition, the methodology developed for this work can be applied to further doping studies in other systems.
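
For reference, charged-defect energetics of this kind are conventionally computed from the standard formation-energy expression (the usual Zhang-Northrup/Van de Walle form; nothing below is specific to this thesis):

```latex
% Formation energy of defect X in charge state q: n_i atoms of species i are
% added (n_i > 0) or removed (n_i < 0) at chemical potential mu_i; E_F is
% measured from the valence-band maximum E_VBM; E_corr is the finite-size
% electrostatic correction for charged supercells.
E_f[X^q] = E_{\mathrm{tot}}[X^q] - E_{\mathrm{tot}}[\mathrm{bulk}]
         - \sum_i n_i\,\mu_i + q\,(E_F + E_{\mathrm{VBM}}) + E_{\mathrm{corr}}
```

In this convention, a compensating defect in a negative charge state has a formation energy that decreases as E_F rises toward the conduction band, which is precisely the "electron sink" behavior described above.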

Accurate determination of high-pressure and high-temperature equations of state is fundamental in a variety of fields. However, it is often very difficult to cover a wide range of temperatures and pressures in a laboratory setting. Here we develop methods to determine a multi-phase equation of state for Ta through computation. The typical means of investigating thermodynamic properties is 'classical' molecular dynamics, where the atomic motion is calculated from Newtonian mechanics and the electronic effects are abstracted away into an interatomic potential function. For our purposes, a 'first-principles' approach such as DFT is useful, as a classical potential is typically valid only for the portion of the phase diagram to which it has been fit. Furthermore, at extremes of temperature and pressure, quantum effects become critical to accurately capturing an equation of state and are very hard to capture in even complex model potentials. This requires extending the inherently zero-temperature DFT to predict the finite-temperature response of the system. Statistical modelling and thermodynamic integration are used to extend our results over all phases, as well as phase-coexistence regions, which are at the limits of typical DFT validity. We deliver the most comprehensive and accurate equation of state for Ta to date. This work also lends insights that can be applied to equation-of-state work in many other materials.
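
The thermodynamic-integration step mentioned here can be written in its usual coupling-constant form (standard statistical mechanics, not a thesis-specific expression): the free energy of the expensive DFT system is reached from a tractable reference potential via

```latex
% Kirkwood coupling-constant integration from a reference system to DFT.
% U_lambda interpolates between the two potential-energy surfaces; the
% ensemble average is taken with U_lambda at fixed (V, T).
U_\lambda = (1-\lambda)\,U_{\mathrm{ref}} + \lambda\,U_{\mathrm{DFT}},
\qquad
F_{\mathrm{DFT}}(V,T) = F_{\mathrm{ref}}(V,T)
  + \int_0^1 \big\langle\, U_{\mathrm{DFT}} - U_{\mathrm{ref}} \,\big\rangle_{\lambda}\, d\lambda
```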

Relevance: 10.00%

Abstract:

The epoch of reionization remains one of the last uncharted eras of cosmic history, yet this time is of crucial importance, encompassing the formation of both the first galaxies and the first metals in the universe. In this thesis, I present four related projects that both characterize the abundance and properties of these first galaxies and use follow-up observations of these galaxies to achieve one of the first observations of the neutral fraction of the intergalactic medium during the heart of the reionization era.

First, we present the results of a spectroscopic survey using the Keck telescopes targeting 6.3 < z < 8.8 star-forming galaxies. We secured observations of 19 candidates, initially selected by applying the Lyman-break technique to infrared imaging data from the Wide Field Camera 3 (WFC3) onboard the Hubble Space Telescope (HST). This survey builds upon earlier work from Stark et al. (2010, 2011), which showed that star-forming galaxies at 3 < z < 6, when the universe was highly ionized, displayed a significant increase in strong Lyman alpha emission with redshift. Our work uses the LRIS and NIRSPEC instruments to search for Lyman alpha emission in candidates at greater redshift in the observed near-infrared, in order to discern whether this evolution continues or is quenched by an increase in the neutral fraction of the intergalactic medium. Our spectroscopic observations typically reach a 5σ limiting sensitivity of < 50 Å. Despite expecting to detect Lyman alpha at 5σ in 7-8 galaxies based on our Monte Carlo simulations, we achieve secure detections in only two of 19 sources. Combining these results with a similar sample of 7 galaxies from Fontana et al. (2010), we determine that these few detections would occur in < 1% of simulations if the intrinsic distribution were the same as that at z ~ 6. We consider other explanations for this decline, but find the most convincing explanation to be an increase in the neutral fraction of the intergalactic medium. Using theoretical models, we infer a neutral fraction of X_HI ~ 0.44 at z = 7.

Second, we characterize the abundance of star-forming galaxies at z > 6.5, again using WFC3 onboard HST. This project conducted a detailed search for candidates both in the Hubble Ultra Deep Field and in a number of wider HST surveys to construct luminosity functions at z ~ 7 and 8, reaching 0.65 and 0.25 mag fainter, respectively, than any previous survey. With this increased depth, we achieve some of the most robust constraints on the Schechter-function faint-end slopes at these redshifts, finding very steep values of α_(z~7) = -1.87 ± 0.18 and α_(z~8) = -1.94 ± 0.23. We discuss these results in the context of cosmic reionization and show that, given reasonable assumptions about the ionizing spectra and the escape fraction of ionizing photons, only half the photons needed to maintain reionization are provided by currently observable galaxies at z ~ 7-8. We show that an extension of the luminosity function down to M_UV = -13.0, coupled with a low level of star formation out to higher redshift, can fit all available constraints on the ionization history of the universe.
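
The sensitivity of the ionizing budget to the faint-end slope and the M_UV = -13 cutoff can be sketched by integrating a Schechter function, ρ_UV ∝ φ* L* Γ(α+2, L_min/L*). The sketch below uses the α quoted above but an assumed M* (the normalization φ* cancels in the ratio):

```python
# Extra UV luminosity density gained by extending the Schechter integral
# from a typical survey limit down to M_UV = -13 (phi* cancels in the ratio).
from scipy.special import gammaincc, gamma   # regularized upper incomplete gamma

def lum_density(alpha, M_lim, M_star=-20.5):
    """UV luminosity density up to the (cancelling) phi* L* normalization.
    M_star is an assumed characteristic magnitude, not a value from the text."""
    a = alpha + 2.0                        # integral converges here: alpha > -2
    x = 10.0 ** (-0.4 * (M_lim - M_star))  # L_min / L*
    return gamma(a) * gammaincc(a, x)      # upper incomplete Gamma(a, x)

alpha = -1.87                              # quoted z~7 faint-end slope
shallow = lum_density(alpha, M_lim=-17.0)  # typical direct survey limit (assumed)
deep = lum_density(alpha, M_lim=-13.0)     # extrapolated cutoff from the text
print(f"integrating to M_UV = -13 gains a factor {deep / shallow:.1f}")
```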

Third, we investigate the strength of nebular emission in 3 < z < 5 star-forming galaxies. We begin by using the Infrared Array Camera (IRAC) onboard the Spitzer Space Telescope to investigate the strength of H alpha emission in a sample of 3.8 < z < 5.0 spectroscopically confirmed galaxies. We then conduct near-infrared observations of star-forming galaxies at 3 < z < 3.8 to investigate the strength of the [OIII] 4959/5007 and H beta emission lines from the ground using MOSFIRE. In both cases, we uncover near-ubiquitous strong nebular emission and find excellent agreement between the fluxes derived using the separate methods. For a subset of 9 objects in our MOSFIRE sample that have secure Spitzer IRAC detections, we compare the emission-line flux derived from the excess in the K_s-band photometry to that derived from direct spectroscopy, and find that 7 agree within a factor of 1.6, with only one catastrophic outlier. Finally, for a different subset for which we also have DEIMOS rest-UV spectroscopy, we compare the relative velocities of Lyman alpha and the rest-optical nebular lines, which should trace the sites of star formation. We find a median velocity offset of only v_(Lyα) = 149 km/s, significantly less than the 400 km/s observed for star-forming galaxies with weaker Lyman alpha emission at z = 2-3 (Steidel et al. 2010), and show that this decrease can be explained by a decrease in the neutral hydrogen column density covering the galaxy. We discuss how this implies a lower neutral fraction for a given observed extinction of Lyman alpha when its visibility is used to probe the ionization state of the intergalactic medium.

Finally, we utilize the recent CANDELS wide-field infrared photometry over the GOODS-N and GOODS-S fields to re-analyze the use of Lyman alpha emission to evaluate the neutrality of the intergalactic medium. With these new data, we derive accurate ultraviolet spectral slopes for a sample of 468 3 < z < 6 star-forming galaxies, already observed in the rest-UV with the Keck spectroscopic survey (Stark et al. 2010). We use a Bayesian fitting method, which accurately accounts for contamination and obscuration by skylines, to derive a relationship between the UV slope of a galaxy and its intrinsic Lyman alpha equivalent-width probability distribution. We then apply these data to spectroscopic surveys during the reionization era, including our own, to accurately interpret the drop in observed Lyman alpha emission. From our most recent such MOSFIRE survey, we also present evidence for the most distant galaxy confirmed through emission-line spectroscopy, at z = 7.62, as well as a first detection of the CIII] 1907/1909 doublet at z > 7.

We conclude the thesis by exploring future prospects and summarizing the results of Robertson et al. (2013). This work synthesizes many of the measurements in this thesis, along with external constraints, to create a model of reionization that fits nearly all available constraints.

Relevance: 10.00%

Abstract:

The 1-6 MeV electron flux at 1 AU has been measured for the time period October 1972 to December 1977 by the Caltech Electron/Isotope Spectrometers on the IMP-7 and IMP-8 satellites. The non-solar interplanetary electron flux reported here covered parts of five synodic periods. The 88 Jovian increases identified in these five synodic periods were classified by their time profiles. The fall time profiles were consistent with an exponential fall with τ ≈ 4-9 days. The rise time profiles displayed a systematic variation over the synodic period. Exponential rise time profiles with τ ≈ 1-3 days tended to occur in the time period before nominal connection, diffusive profiles predicted by the convection-diffusion model around nominal connection, and abrupt profiles after nominal connection.

The times of enhancements in the magnetic field, |B|, at 1 AU showed a better correlation than corotating interaction regions (CIRs) with Jovian increases and other changes in the electron flux at 1 AU, suggesting that |B| enhancements mark the times when barriers to electron propagation pass the Earth. Time sequences of the increases and decreases in the electron flux at 1 AU were qualitatively modeled using the times at which CIRs passed Jupiter and the times at which |B| enhancements passed the Earth.

The electron data observed at 1 AU were modeled by using a convection-diffusion model of Jovian electron propagation. The synodic envelope formed by the maxima of the Jovian increases was modeled by the envelope formed by the predicted intensities at a time less than that needed to reach equilibrium. Even though the envelope shape calculated in this way was similar to the observed envelope, the required diffusion coefficients were not consistent with a diffusive process.

Three Jovian electron increases at 1 AU during the 1974 synodic period were fit with rise time profiles calculated from the convection-diffusion model. For the fits without an ambient electron background flux, the diffusion coefficients consistent with the data were k_x = 1.0-2.5 × 10^(21) cm^2/s and k_y = 1.6-2.0 × 10^(22) cm^2/s. For the fits that included the ambient electron background flux, the values consistent with the data were k_x = 0.4-1.0 × 10^(21) cm^2/s and k_y = 0.8-1.3 × 10^(22) cm^2/s.
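
For reference, the building block of such convection-diffusion fits is the anisotropic point-source solution. The sketch below evaluates the generic 2D Green's function with diffusion coefficients in the fitted range quoted above; the convection speed, geometry, and normalization are simplified placeholders, not the full propagation model used in the thesis:

```python
# Generic 2D anisotropic convection-diffusion Green's function (simplified
# sketch; not the full Jovian-electron propagation model of the thesis).
import math

AU_CM = 1.5e13
kx = 1.5e21   # cm^2/s, within the fitted range quoted above
ky = 1.5e22   # cm^2/s, within the fitted range quoted above
V = 4.0e7     # solar-wind convection speed along x, cm/s (assumed ~400 km/s)

def impulse_response(x_au, y_au, t_days):
    """Density at (x, y) from a unit impulse released at the origin at t = 0,
    with convection at speed V along x and anisotropic diffusion (kx, ky)."""
    t = t_days * 86400.0
    x, y = x_au * AU_CM, y_au * AU_CM
    norm = 1.0 / (4.0 * math.pi * t * math.sqrt(kx * ky))
    return norm * math.exp(-(x - V * t) ** 2 / (4 * kx * t)
                           - y ** 2 / (4 * ky * t))

# diffusive rise profile at a fixed observer (illustrative distances only)
for days in (1, 2, 4, 8, 16):
    print(f"day {days:2d}: {impulse_response(4.0, 1.0, days):.2e}")
```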