11 results for "reaching" in CaltechTHESIS


Relevance: 10.00%

Abstract:

Sensory-motor circuits course through the parietal cortex of the human and monkey brain. How parietal cortex manipulates these signals has been an important question in behavioral neuroscience. This thesis presents experiments that explore the contributions of monkey parietal cortex to sensory-motor processing, with an emphasis on the area's contributions to reaching. First, it is shown that parietal cortex is organized into subregions devoted to specific movements. Area LIP encodes plans to make saccadic eye movements. A nearby area, the parietal reach region (PRR), plans reaches. A series of experiments are then described which explore the contributions of PRR to reach planning. Reach plans are represented in an eye-centered reference frame in PRR. This representation is shown to be stable across eye movements. When a sequence of reaches is planned, only the impending movement is represented in PRR, showing that the area is more related to movement planning than to storing the memory of reach targets. PRR resembles area LIP in each of these properties: the two areas may provide a substrate for hand-eye coordination. These findings yield new perspectives on the functions of the parietal cortex and on the organization of sensory-motor processing in primate brains.

Relevance: 10.00%

Abstract:

This thesis belongs to the growing field of economic networks. In particular, we develop three essays in which we study the problems of bargaining, discrete choice representation, and pricing in the context of networked markets. Despite analyzing very different problems, the three essays share the common feature of using a network representation to describe the market of interest.

In Chapter 1 we present an analysis of bargaining in networked markets. We make two contributions. First, we characterize market equilibria in a bargaining model, and find that players' equilibrium payoffs coincide with their degree of centrality in the network, as measured by Bonacich's centrality measure. This characterization allows us to map, in a simple way, network structures into market equilibrium outcomes, so that payoff dispersion in networked markets is driven by players' network positions. Second, we show that the market equilibrium for our model converges to the so-called eigenvector centrality measure. The economic condition for reaching convergence is that the players' discount factor goes to one. In particular, we show how the discount factor, the matching technology, and the network structure interact so that the eigenvector centrality emerges as the limiting case of our market equilibrium.

We point out that the eigenvector approach is a way of finding the most central or relevant players in terms of the “global” structure of the network, paying less attention to patterns that are more “local”. Mathematically, the eigenvector centrality captures the relevance of players in the bargaining process using the eigenvector associated with the largest eigenvalue of the adjacency matrix of a given network. Thus our result may be viewed as an economic justification of the eigenvector approach in the context of bargaining in networked markets.
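The eigenvector computation described above can be made concrete with a few lines of power iteration. This is only an illustrative sketch (the 4-player network and all values are hypothetical, not from the thesis); repeatedly applying the adjacency matrix loosely mirrors the limiting process as the discount factor goes to one.

```python
import numpy as np

def eigenvector_centrality(A, tol=1e-10, max_iter=1000):
    """Leading eigenvector of adjacency matrix A, via power iteration."""
    x = np.ones(A.shape[0]) / A.shape[0]
    for _ in range(max_iter):
        x_new = A @ x
        x_new /= np.linalg.norm(x_new)  # renormalize at every step
        if np.linalg.norm(x_new - x) < tol:
            break
        x = x_new
    return x_new

# Hypothetical 4-player line market 1-2-3-4: interior players trade
# with two partners, end players with only one.
A = np.array([[0., 1., 0., 0.],
              [1., 0., 1., 0.],
              [0., 1., 0., 1.],
              [0., 0., 1., 0.]])
centrality = eigenvector_centrality(A)  # interior players score highest
```

As expected from the payoff-dispersion result, the two interior players receive strictly higher centrality than the two end players.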

As an application, we analyze the special case of seller-buyer networks, showing how our framework may be useful for analyzing price dispersion as a function of sellers and buyers' network positions.

Finally, in Chapter 3 we study the problem of price competition and free entry in networked markets subject to congestion effects. In many environments, such as communication networks in which network flows are allocated, or transportation networks in which traffic is directed through the underlying road architecture, congestion plays an important role. In particular, we consider a network with multiple origins and a common destination node, where each link is owned by a firm that sets prices in order to maximize profits, whereas users want to minimize the total cost they face, which is given by the congestion cost plus the prices set by firms. In this environment, we introduce the notion of Markovian traffic equilibrium to establish the existence and uniqueness of a pure strategy price equilibrium, without assuming that the demand functions are concave or imposing particular functional forms for the latency functions. We derive explicit conditions to guarantee existence and uniqueness of equilibria. Given this existence and uniqueness result, we apply our framework to study entry decisions and welfare, and establish that in congested markets with free entry, the number of firms exceeds the social optimum.

Relevance: 10.00%

Abstract:

The two most important digital-system design goals today are to reduce power consumption and to increase reliability. Reductions in power consumption improve battery life in the mobile space, and reductions in energy lower operating costs in the datacenter. Increased robustness and reliability shorten downtime, improve yield, and are invaluable in the context of safety-critical systems. While optimizing towards these two goals is important at all design levels, optimizations at the circuit level have the furthest-reaching effects; they apply to all digital systems. This dissertation presents a study of robust minimum-energy digital circuit design and analysis. It introduces new device models, metrics, and methods of calculation—all necessary first steps towards building better systems—and demonstrates how to apply these techniques. It analyzes a fabricated chip (a full-custom QDI microcontroller designed at Caltech and taped out in 40-nm silicon) by calculating the minimum-energy operating point and quantifying the chip’s robustness in the face of both timing and functional failures.

Relevance: 10.00%

Abstract:

Most space applications require deployable structures due to the limited size of current launch vehicles. Specifically, payloads in nanosatellites such as CubeSats require very high compaction ratios due to the very limited space available in this type of platform. Strain-energy-storing deployable structures can be suitable for these applications, but the curvature to which these structures can be folded is limited to the elastic range. Thanks to fiber microbuckling, high-strain composite materials can be folded to much higher curvatures without showing significant damage, which makes them suitable for deployable structures requiring very high compaction. However, in applications that require carrying loads in compression, fiber microbuckling also dominates the strength of the material. A good understanding of the compressive strength of high-strain composites is therefore needed to determine how suitable they are for this type of application.

The goal of this thesis is to investigate, experimentally and numerically, the microbuckling in compression of high-strain composites. In particular, the compressive behavior of unidirectional carbon fiber reinforced silicone (CFRS) rods is studied. Experimental testing of the compression failure of CFRS rods showed a higher compressive strength than analytical models estimate, which is unusual in standard polymer composites. This effect, first discovered in the present research, is attributed to the random variation of the carbon fiber angles with respect to the nominal direction. This is an important effect, as it implies that microbuckling strength might be increased by controlling the fiber angles. With a higher microbuckling strength, high-strain materials could carry compressive loads without reaching microbuckling and would therefore be suitable for several space applications.

A finite element model was developed to predict the homogenized stiffness of the CFRS, and the homogenization results were used in a second finite element model that simulated a homogenized rod under axial compression. A statistical representation of the fiber angles was implemented in the model. The presence of fiber angles increased the longitudinal shear stiffness of the material, resulting in a higher compressive strength. The simulations showed a large increase in compressive strength for lower values of the standard deviation of the fiber angle, and a slight decrease in compressive strength for lower values of the mean fiber angle. The strength observed in the experiments was reproduced using the minimum local angle standard deviation observed in the CFRS rods, whereas the shear stiffness measured in torsion tests was reproduced using the overall fiber angle distribution observed in the CFRS rods.

High-strain composites exhibit good bending capabilities, but they tend to be soft out of plane. To achieve a higher out-of-plane stiffness, the concept of dual-matrix composites is introduced. Dual-matrix composites are foldable composites that are soft in the crease regions and stiff elsewhere. Previous attempts to fabricate continuous dual-matrix fiber composite shells had limited performance due to excessive resin flow and matrix mixing. An alternative method, presented in this thesis, uses UV-cure silicone and fiberglass to avoid these problems. Preliminary experiments on the effect of folding on the out-of-plane stiffness are presented. An application to a conical log-periodic antenna for CubeSats is proposed, using origami-inspired stowing schemes that allow a conical dual-matrix composite shell to reach very high compaction ratios.

Relevance: 10.00%

Abstract:

An economic air pollution control model, which determines the least cost of reaching various air quality levels, is formulated. The model takes the form of a general, nonlinear, mathematical programming problem. Primary contaminant emission levels are the independent variables. The objective function is the cost of attaining various emission levels and is to be minimized subject to constraints that given air quality levels be attained.
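The structure of such a least-cost model can be sketched as a small linear program; the control measures, costs, and reduction targets below are invented placeholders, not the thesis data, and serve only to show the "minimize cost subject to emission targets" form.

```python
import numpy as np
from scipy.optimize import linprog

# Columns: three hypothetical control measures; rows: tons/day of RHC
# and NOx removed at full adoption. Costs in $M/yr. All values invented.
reductions = np.array([[120.0,  40.0, 200.0],   # RHC removed
                       [ 10.0, 150.0,  60.0]])  # NOx removed
costs = np.array([5.0, 8.0, 20.0])
required = np.array([250.0, 150.0])             # reduction targets

# Minimize total cost subject to reductions @ x >= required, 0 <= x <= 1.
# linprog expects A_ub @ x <= b_ub, so both sides are negated.
res = linprog(costs, A_ub=-reductions, b_ub=-required,
              bounds=[(0.0, 1.0)] * 3)
least_cost, adoption = res.fun, res.x
```

The decision variables are fractional adoption levels of each control; the optimum mixes measures rather than fully adopting the cheapest one.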

The model is applied to a simplified statement of the photochemical smog problem in Los Angeles County in 1975, with emissions specified by a two-dimensional vector: total reactive hydrocarbon (RHC) and nitrogen oxide (NOx) emissions. Air quality, also two-dimensional, is measured by the expected number of days per year that nitrogen dioxide (NO2) and mid-day ozone (O3) exceed standards in Central Los Angeles.

The minimum cost of reaching various emission levels is found by a linear programming model. The base or "uncontrolled" emission levels are those that will exist in 1975 with the present new car control program and with the degree of stationary source control existing in 1971. Controls, basically "add-on" devices, are considered here for used cars, aircraft, and existing stationary sources. It is found that with these added controls, Los Angeles County emission levels [(1300 tons/day RHC, 1000 tons/day NOx) in 1969 and (670 tons/day RHC, 790 tons/day NOx) at the base 1975 level] can be reduced to 260 tons/day RHC (minimum RHC program) and 460 tons/day NOx (minimum NOx program).

"Phenomenological" or statistical air quality models provide the relationship between air quality and emissions. These models estimate the relationship by using atmospheric monitoring data taken at one (yearly) emission level and by using certain simple physical assumptions, (e. g., that emissions are reduced proportionately at all points in space and time). For NO2, (concentrations assumed proportional to NOx emissions), it is found that standard violations in Central Los Angeles, (55 in 1969), can be reduced to 25, 5, and 0 days per year by controlling emissions to 800, 550, and 300 tons /day, respectively. A probabilistic model reveals that RHC control is much more effective than NOx control in reducing Central Los Angeles ozone. The 150 days per year ozone violations in 1969 can be reduced to 75, 30, 10, and 0 days per year by abating RHC emissions to 700, 450, 300, and 150 tons/day, respectively, (at the 1969 NOx emission level).

The control cost-emission level and air quality-emission level relationships are combined in a graphical solution of the complete model to find the cost of various air quality levels. Best possible air quality levels with the controls considered here are 8 O3 and 10 NO2 violations per year (minimum ozone program) or 25 O3 and 3 NO2 violations per year (minimum NO2 program) with an annualized cost of $230,000,000 (above the estimated $150,000,000 per year for the new car control program for Los Angeles County motor vehicles in 1975).

Relevance: 10.00%

Abstract:

The epoch of reionization remains one of the last uncharted eras of cosmic history, yet this time is of crucial importance, encompassing the formation of both the first galaxies and the first metals in the universe. In this thesis, I present four related projects that characterize the abundance and properties of these first galaxies and use follow-up observations of these galaxies to achieve one of the first measurements of the neutral fraction of the intergalactic medium during the heart of the reionization era.

First, we present the results of a spectroscopic survey using the Keck telescopes targeting 6.3 < z < 8.8 star-forming galaxies. We secured observations of 19 candidates, initially selected by applying the Lyman break technique to infrared imaging data from the Wide Field Camera 3 (WFC3) onboard the Hubble Space Telescope (HST). This survey builds upon earlier work from Stark et al. (2010, 2011), which showed that star-forming galaxies at 3 < z < 6, when the universe was highly ionized, displayed a significant increase in strong Lyman alpha emission with redshift. Our work uses the LRIS and NIRSPEC instruments to search for Lyman alpha emission in candidates at a greater redshift in the observed near-infrared, in order to discern if this evolution continues, or is quenched by an increase in the neutral fraction of the intergalactic medium. Our spectroscopic observations typically reach a 5-sigma limiting sensitivity of < 50 Å. Despite expecting to detect Lyman alpha at 5-sigma in 7-8 galaxies based on our Monte Carlo simulations, we only achieve secure detections in two of 19 sources. Combining these results with a similar sample of 7 galaxies from Fontana et al. (2010), we determine that these few detections would only occur in < 1% of simulations if the intrinsic distribution was the same as that at z ~ 6. We consider other explanations for this decline, but find the most convincing explanation to be an increase in the neutral fraction of the intergalactic medium. Using theoretical models, we infer a neutral fraction of X_HI ~ 0.44 at z = 7.
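The statistical argument — that only 2 detections where 7-8 were expected is very unlikely under the z ~ 6 distribution — can be checked with a simple binomial calculation. As a simplification not made in the thesis (which used full Monte Carlo simulations), assume a uniform per-source detection probability of 7.5/19:

```python
from math import comb

n = 19        # observed candidates
p = 7.5 / 19  # assumed uniform per-source detection probability

# Probability of 2 or fewer detections if the z ~ 6 distribution held
prob_at_most_2 = sum(comb(n, k) * p**k * (1 - p)**(n - k)
                     for k in range(3))
```

Even under this crude assumption the result comes out below 1%, consistent with the quoted "< 1% of simulations".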

Second, we characterize the abundance of star-forming galaxies at z > 6.5 again using WFC3 onboard the HST. This project conducted a detailed search for candidates both in the Hubble Ultra Deep Field as well as a number of additional wider Hubble Space Telescope surveys to construct luminosity functions at both z ~ 7 and 8, reaching 0.65 and 0.25 mag fainter than any previous surveys, respectively. With this increased depth, we achieve some of the most robust constraints on the Schechter function faint end slopes at these redshifts, finding very steep values of alpha_{z~7} = -1.87 +/- 0.18 and alpha_{z~8} = -1.94 +/- 0.23. We discuss these results in the context of cosmic reionization, and show that given reasonable assumptions about the ionizing spectra and escape fraction of ionizing photons, only half the photons needed to maintain reionization are provided by currently observable galaxies at z ~ 7-8. We show that an extension of the luminosity function down to M_{UV} = -13.0, coupled with a low level of star-formation out to higher redshift, can fit all available constraints on the ionization history of the universe.
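The Schechter form used for such luminosity functions, written in absolute magnitudes, is phi(M) = 0.4 ln(10) phi* 10^(0.4(alpha+1)(M*-M)) exp(-10^(0.4(M*-M))). The sketch below evaluates it with the quoted z ~ 7 faint-end slope; the values of phi* and M* are placeholders, not the thesis fits.

```python
import numpy as np

def schechter_mag(M, phi_star, M_star, alpha):
    """Schechter luminosity function, number density per unit magnitude."""
    x = 10.0 ** (0.4 * (M_star - M))
    return 0.4 * np.log(10.0) * phi_star * x ** (alpha + 1) * np.exp(-x)

alpha = -1.87                                # z ~ 7 faint-end slope
M = np.array([-21.0, -18.0, -15.0])          # bright to faint
phi = schechter_mag(M, 1e-3, -20.0, alpha)   # phi*, M* are placeholders
```

With alpha < -1 the number density keeps rising toward fainter magnitudes, which is why extending the count down to M_UV = -13.0 adds so many ionizing sources.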

Third, we investigate the strength of nebular emission in 3 < z < 5 star-forming galaxies. We begin by using the Infrared Array Camera (IRAC) onboard the Spitzer Space Telescope to investigate the strength of H alpha emission in a sample of 3.8 < z < 5.0 spectroscopically confirmed galaxies. We then conduct near-infrared observations of star-forming galaxies at 3 < z < 3.8 to investigate the strength of the [OIII] 4959/5007 and H beta emission lines from the ground using MOSFIRE. In both cases, we uncover near-ubiquitous strong nebular emission, and find excellent agreement between the fluxes derived using the separate methods. For a subset of 9 objects in our MOSFIRE sample that have secure Spitzer IRAC detections, we compare the emission line flux derived from the excess in the K_s band photometry to that derived from direct spectroscopy and find 7 to agree within a factor of 1.6, with only one catastrophic outlier. Finally, for a different subset for which we also have DEIMOS rest-UV spectroscopy, we compare the relative velocities of Lyman alpha and the rest-optical nebular lines which should trace the sites of star formation. We find a median velocity offset of only v_{Ly alpha} = 149 km/s, significantly less than the 400 km/s observed for star-forming galaxies with weaker Lyman alpha emission at z = 2-3 (Steidel et al. 2010), and show that this decrease can be explained by a decrease in the neutral hydrogen column density covering the galaxy. We discuss how this will imply a lower neutral fraction for a given observed extinction of Lyman alpha when its visibility is used to probe the ionization state of the intergalactic medium.

Finally, we utilize the recent CANDELS wide-field, infrared photometry over the GOODS-N and S fields to re-analyze the use of Lyman alpha emission to evaluate the neutrality of the intergalactic medium. With this new data, we derive accurate ultraviolet spectral slopes for a sample of 468 3 < z < 6 star-forming galaxies, already observed in the rest-UV with the Keck spectroscopic survey (Stark et al. 2010). We use a Bayesian fitting method which accurately accounts for contamination and obscuration by skylines to derive a relationship between the UV slope of a galaxy and its intrinsic Lyman alpha equivalent width probability distribution. We then apply this relationship to spectroscopic surveys during the reionization era, including our own, to accurately interpret the drop in observed Lyman alpha emission. From our most recent such MOSFIRE survey, we also present evidence for the most distant galaxy confirmed through emission line spectroscopy at z = 7.62, as well as a first detection of the CIII] 1907/1909 doublet at z > 7.

We conclude the thesis by exploring future prospects and summarizing the results of Robertson et al. (2013). This work synthesizes many of the measurements in this thesis, along with external constraints, to create a model of reionization that fits nearly all available constraints.

Relevance: 10.00%

Abstract:

Thermoelectric materials have attracted significant attention for their ability to convert waste heat directly to electricity with no moving parts. A resurgence in thermoelectrics research has led to significant enhancements in the thermoelectric figure of merit, zT, even for materials that were already well studied. This thesis approaches thermoelectric zT optimization by developing a detailed understanding of the electronic structure using a combination of electronic/thermoelectric properties, optical properties, and ab-initio computed electronic band structures. This is accomplished by applying these techniques to three important classes of thermoelectric materials: IV-VI materials (the lead chalcogenides), half-Heuslers (XNiSn, where X = Zr, Ti, Hf), and CoSb3 skutterudites.
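The figure of merit mentioned above combines the Seebeck coefficient S, electrical conductivity sigma, absolute temperature T, and thermal conductivity kappa as zT = S^2 * sigma * T / kappa. A one-line check with illustrative values (not the thesis measurements):

```python
def figure_of_merit(seebeck, sigma, kappa, T):
    """zT = S^2 * sigma * T / kappa, all quantities in SI units."""
    return seebeck**2 * sigma * T / kappa

# Illustrative values for a good thermoelectric: S = 200 uV/K,
# sigma = 1e5 S/m, kappa = 1.5 W/(m K), T = 800 K
zT = figure_of_merit(200e-6, 1e5, 1.5, 800.0)  # -> roughly 2.1
```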

In the IV-VI materials (PbTe, PbSe, PbS) I present a shifting temperature-dependent optical absorption edge which correlates well to the computed ab-initio molecular dynamics result. Contrary to prior literature that suggests convergence of the primary and secondary bands at 400 K, I suggest a higher convergence temperature of 700, 900, and 1000 K for PbTe, PbSe, and PbS, respectively. This finding can help guide electronic properties modelling by providing a concrete value for the band gap and valence band offset as a function of temperature.

Another important thermoelectric material, ZrNiSn (half-Heusler), is analyzed for both its optical and electronic properties; transport properties indicate a largely different band gap depending on whether the material is doped n-type or p-type. By measuring and reporting the optical band gap value of 0.13 eV, I resolve the discrepancy in the gap calculated from electronic properties (maximum Seebeck and resistivity) by correlating these estimates to the electron-to-hole weighted mobility ratio, A, in narrow gap materials (A is found to be approximately 5.0 in ZrNiSn).

I also show that CoSb3 contains multiple conduction bands that contribute to the thermoelectric properties. These bands are also observed to shift towards each other with temperature, eventually reaching effective convergence for T>500 K. This implies that the electronic structure in CoSb3 is critically important (and possibly engineerable) with regards to its high thermoelectric figure of merit.

Relevance: 10.00%

Abstract:

Many applications in cosmology and astrophysics at millimeter wavelengths, including CMB polarization, studies of galaxy clusters using the Sunyaev-Zeldovich effect (SZE), and studies of star formation at high redshift, in the local universe, and in our galaxy, require large-format arrays of millimeter-wave detectors. Feedhorn and phased-array antenna architectures for receiving mm-wave light present numerous advantages for control of systematics, for simultaneous coverage of both polarizations and/or multiple spectral bands, and for preserving the coherent nature of the incoming light. This enables the application of many traditional "RF" structures such as hybrids, switches, and lumped-element or microstrip band-defining filters.

Simultaneously, kinetic inductance detectors (KIDs) using high-resistivity materials like titanium nitride are an attractive sensor option for large-format arrays because they are highly multiplexable and because they can have sensitivities reaching the condition of background-limited detection. A KID is an LC resonator whose inductance includes both the geometric inductance and the kinetic inductance of the inductor in the superconducting phase. A photon absorbed by the superconductor breaks a Cooper pair into normal-state electrons and perturbs the kinetic inductance, rendering the resonator a detector of light. The responsivity of a KID is given by the fractional frequency shift of the LC resonator per unit optical power.
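A back-of-the-envelope version of this responsivity mechanism: for a resonator with f0 = 1/(2*pi*sqrt(LC)), a small kinetic-inductance perturbation dLk shifts the resonance by df/f ~ -(alpha/2)(dLk/Lk), where alpha = Lk/(Lg + Lk) is the kinetic-inductance fraction. All component values below are illustrative, not from the design described here.

```python
import math

def resonant_freq(L, C):
    """f0 = 1 / (2*pi*sqrt(L*C)) for a lumped LC resonator."""
    return 1.0 / (2.0 * math.pi * math.sqrt(L * C))

Lg, Lk, C = 5e-9, 10e-9, 1e-12      # illustrative values (H, H, F)
f0 = resonant_freq(Lg + Lk, C)

dLk = 1e-12                          # pair-breaking perturbation to Lk
f1 = resonant_freq(Lg + Lk + dLk, C)
frac_shift = (f1 - f0) / f0          # fractional shift (negative)

alpha = Lk / (Lg + Lk)               # kinetic-inductance fraction
approx = -0.5 * alpha * (dLk / Lk)   # first-order estimate
```

The exact shift and the first-order estimate agree closely for small perturbations, which is why the responsivity is usefully linear in absorbed power.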

However, coupling these types of optical reception elements to KIDs is a challenge because of the impedance mismatch between the microstrip transmission line exiting these architectures and the high resistivity of titanium nitride. Mitigating direct absorption of light through free-space coupling to the KID inductor is another challenge. We present a detailed titanium nitride KID design that addresses these challenges. The KID inductor is capacitively coupled to the microstrip in such a way as to form a lossy termination without creating an impedance mismatch. A parallel-plate capacitor design using hydrogenated amorphous silicon mitigates direct absorption and yields acceptable noise. We show that the optimized design can yield expected sensitivities very close to the fundamental limit for a long-wavelength imager (LWCam) covering six spectral bands from 90 to 400 GHz for SZE studies.

Excess phase (frequency) noise has been observed in KIDs and is very likely caused by two-level systems (TLS) in dielectric materials. The TLS hypothesis is supported by the measured dependence of the noise on resonator internal power and temperature. However, there is still no unified microscopic theory that can quantitatively model the properties of the TLS noise. In this thesis we derive the noise power spectral density due to the coupling of TLS to the phonon bath based on an existing model, and compare the theoretical predictions of the power and temperature dependences with experimental data. We discuss the limitations of this model and propose directions for future study.

Relevance: 10.00%

Abstract:

The equations of motion for the flow of a mixture of liquid droplets, their vapor, and an inert gas through a normal shock wave are derived. A set of equations is obtained which is solved numerically for the equilibrium conditions far downstream of the shock. The equations describing the process of reaching equilibrium are also obtained. This is a set of first-order nonlinear differential equations and must also be solved numerically. The detailed equilibration process is obtained for several cases and the results are discussed.
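As a schematic of such an equilibration calculation — not the actual derived equations — the sketch below integrates a toy first-order relaxation system toward its downstream equilibrium; the state variables, equilibrium values, and rates are all hypothetical.

```python
import numpy as np
from scipy.integrate import solve_ivp

y_eq = np.array([1.0, 0.3])      # hypothetical downstream equilibrium
rates = np.array([50.0, 10.0])   # hypothetical relaxation rates

def rhs(x, y):
    # Each variable relaxes toward equilibrium at its own rate
    return -rates * (y - y_eq)

sol = solve_ivp(rhs, (0.0, 1.0), y0=[0.0, 0.0], rtol=1e-8)
final = sol.y[:, -1]             # close to y_eq far downstream
```

The real system is nonlinear and coupled, but the qualitative picture — a stiff set of first-order ODEs whose solution approaches the separately computed equilibrium state — is the same.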

Relevance: 10.00%

Abstract:

A variety of neural signals have been measured as correlates to consciousness. In particular, late current sinks in layer 1, distributed activity across the cortex, and feedback processing have all been implicated. What are the physiological underpinnings of these signals? What computational role do they play in the brain? Why do they correlate to consciousness? This thesis begins to answer these questions by focusing on the pyramidal neuron. As the primary communicator of long-range feedforward and feedback signals in the cortex, the pyramidal neuron is set up to play an important role in establishing distributed representations. Additionally, the dendritic extent, reaching layer 1, is well situated to receive feedback inputs and contribute to current sinks in the upper layers. An investigation of pyramidal neuron physiology is therefore necessary to understand how the brain creates, and potentially uses, the neural correlates of consciousness. An important part of this thesis will be in establishing the computational role that dendritic physiology plays. In order to do this, a combined experimental and modeling approach is used.

This thesis begins with single-cell experiments in layer 5 and layer 2/3 pyramidal neurons. In both cases, dendritic nonlinearities are characterized and found to be integral regulators of neural output. Particular attention is paid to calcium spikes and NMDA spikes, which both exist in the apical dendrites, considerable distances from the spike initiation zone. These experiments are then used to create detailed multicompartmental models. These models are used to test hypotheses regarding the spatial distribution of membrane channels, to quantify the effects of certain experimental manipulations, and to establish the computational properties of the single cell. We find that pyramidal neuron physiology can carry out a coincidence detection mechanism. Further abstraction of these models reveals potential mechanisms for spike time control, frequency modulation, and tuning. Finally, a set of experiments is carried out to establish the effect of long-range feedback inputs onto the pyramidal neuron. A final discussion then explores a potential way in which the physiology of pyramidal neurons can establish distributed representations and contribute to consciousness.

Relevance: 10.00%

Abstract:

Hair cells from the bullfrog's sacculus, a vestibular organ responding to substrate-borne vibration, possess electrically resonant membrane properties which maximize the sensitivity of each cell to a particular frequency of mechanical input. The electrical resonance of these cells and its underlying ionic basis were studied by applying gigohm-seal recording techniques to solitary hair cells enzymatically dissociated from the sacculus. The contribution of electrical resonance to frequency selectivity was assessed from microelectrode recordings from hair cells in an excised preparation of the sacculus.

Electrical resonance in the hair cell is demonstrated by damped membrane-potential oscillations in response to extrinsic current pulses applied through the recording pipette. This response is analyzed as that of a damped harmonic oscillator. Oscillation frequency rises with membrane depolarization, from 80-160 Hz at resting potential to asymptotic values of 200-250 Hz. The sharpness of electrical tuning, denoted by the electrical quality factor, Qe, is a bell-shaped function of membrane voltage, reaching a maximum value around eight at a membrane potential slightly positive to the resting potential.
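The damped-oscillator analysis can be made concrete: for a ring-down of the form v(t) = v_rest + A exp(-t/tau) cos(2*pi*f0*t), the quality factor is Qe = pi*f0*tau. The parameter values below are illustrative picks within the reported ranges, not measured values.

```python
import math

def damped_oscillation(t, f0, tau, v_rest=-60.0, amp=5.0):
    """Membrane-potential ring-down in mV (illustrative parameters)."""
    return v_rest + amp * math.exp(-t / tau) * math.cos(2 * math.pi * f0 * t)

def quality_factor(f0, tau):
    # Qe = pi * f0 * tau for an exponentially damped oscillation
    return math.pi * f0 * tau

f0, tau = 140.0, 0.018        # 140 Hz, 18 ms decay constant (illustrative)
Qe = quality_factor(f0, tau)  # comes out near 8, the reported peak value
```

Fitting f0 and tau to a recorded voltage trace is then enough to recover both the oscillation frequency and Qe at a given holding potential.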

In whole cells, three time-variant ionic currents are activated at voltages more positive than -60 to -50 mV; these are identified as a voltage-dependent, non-inactivating Ca current (Ica), a voltage-dependent, transient K current (Ia), and a Ca-dependent K current (Ic). The C channel is identified in excised, inside-out membrane patches on the basis of its large conductance (130-200 pS), its selective permeability to K+ over Na+ or Cl-, and its activation by internal Ca ions and membrane depolarization. Analysis of open- and closed-lifetime distributions suggests that the C channel can assume at least two open and three closed kinetic states.

Exposing hair cells to external solutions that inhibit the Ca or C conductances degrades the electrical resonance properties measured under current-clamp conditions, while blocking the A conductance has no significant effect, providing evidence that only the Ca and C conductances participate in the resonance mechanism. To test the sufficiency of these two conductances to account for electrical resonance, a mathematical model is developed that describes Ica, Ic, and intracellular Ca concentration during voltage-clamp steps. Ica activation is approximated by a third-order Hodgkin-Huxley kinetic scheme. Ca entering the cell is assumed to be confined to a small submembrane compartment which contains an excess of Ca buffer; Ca leaves this space with first-order kinetics. The Ca- and voltage-dependent activation of C channels is described by a five-state kinetic scheme suggested by the results of single-channel observations. Parameter values in the model are adjusted to fit the waveforms of Ica and Ic evoked by a series of voltage-clamp steps in a single cell. Having been thus constrained, the model correctly predicts the character of voltage oscillations produced by current-clamp steps, including the dependencies of oscillation frequency and Qe on membrane voltage. The model shows quantitatively how the Ca and C conductances interact, via changes in intracellular Ca concentration, to produce electrical resonance in a vertebrate hair cell.