18 resultados para stimulus overlapping

em CaltechTHESIS


Relevância:

10.00% 10.00%

Publicador:

Resumo:

The lateral intraparietal area (LIP) of macaque posterior parietal cortex participates in the sensorimotor transformations underlying visually guided eye movements. Area LIP has long been considered unresponsive to auditory stimulation. However, recent studies have shown that neurons in LIP respond to auditory stimuli during an auditory-saccade task, suggesting possible involvement of this area in auditory-to-oculomotor as well as visual-to-oculomotor processing. This dissertation describes investigations which clarify the role of area LIP in auditory-to-oculomotor processing.

Extracellular recordings were obtained from a total of 332 LIP neurons in two macaque monkeys, while the animals performed fixation and saccade tasks involving auditory and visual stimuli. No auditory activity was observed in area LIP before animals were trained to make saccades to auditory stimuli, but responses to auditory stimuli did emerge after auditory-saccade training. Auditory responses in area LIP after auditory-saccade training were significantly stronger in the context of an auditory-saccade task than in the context of a fixation task. Compared to visual responses, auditory responses were also significantly more predictive of movement-related activity in the saccade task. Moreover, while visual responses often had a fast transient component, responses to auditory stimuli in area LIP tended to be gradual in onset and relatively prolonged in duration.

Overall, the analyses demonstrate that responses to auditory stimuli in area LIP are dependent on auditory-saccade training, modulated by behavioral context, and characterized by slow-onset, sustained response profiles. These findings suggest that responses to auditory stimuli are best interpreted as supramodal (cognitive or motor) responses, rather than as modality-specific sensory responses. Auditory responses in area LIP seem to reflect the significance of auditory stimuli as potential targets for eye movements, and may differ from most visual responses in the extent to which they arc abstracted from the sensory parameters of the stimulus.

Relevância:

10.00% 10.00%

Publicador:

Resumo:

The temporal structure of neuronal spike trains in the visual cortex can provide detailed information about the stimulus and about the neuronal implementation of visual processing. Spike trains recorded from the macaque motion area MT in previous studies (Newsome et al., 1989a; Britten et al., 1992; Zohary et al., 1994) are analyzed here in the context of the dynamic random dot stimulus which was used to evoke them. If the stimulus is incoherent, the spike trains can be highly modulated and precisely locked in time to the stimulus. In contrast, the coherent motion stimulus creates little or no temporal modulation and allows us to study patterns in the spike train that may be intrinsic to the cortical circuitry in area MT. Long gaps in the spike train evoked by the preferred direction motion stimulus are found, and they appear to be symmetrical to bursts in the response to the anti-preferred direction of motion. A novel cross-correlation technique is used to establish that the gaps are correlated between pairs of neurons. Temporal modulation is also found in psychophysical experiments using a modified stimulus. A model is made that can account for the temporal modulation in terms of the computational theory of biological image motion processing. A frequency domain analysis of the stimulus reveals that it contains a repeated power spectrum that may account for psychophysical and electrophysiological observations.

Some neurons tend to fire bursts of action potentials while others avoid burst firing. Using numerical and analytical models of spike trains as Poisson processes with the addition of refractory periods and bursting, we are able to account for peaks in the power spectrum near 40 Hz without assuming the existence of an underlying oscillatory signal. A preliminary examination of the local field potential reveals that stimulus-locked oscillation appears briefly at the beginning of the trial.

Relevância:

10.00% 10.00%

Publicador:

Resumo:

Cells in the lateral intraparietal cortex (LIP) of rhesus macaques respond vigorously and in spatially-tuned fashion to briefly memorized visual stimuli. Responses to stimulus presentation, memory maintenance, and task completion are seen, in varying combination from neuron to neuron. To help elucidate this functional segmentation a new system for simultaneous recording from multiple neighboring neurons was developed. The two parts of this dissertation discuss the technical achievements and scientific discoveries, respectively.

Technology. Simultanous recordings from multiple neighboring neurons were made with four-wire bundle electrodes, or tetrodes, which were adapted to the awake behaving primate preparation. Signals from these electrodes were partitionable into a background process with a 1/f-like spectrum and foreground spiking activity spanning 300-6000 Hz. Continuous voltage recordings were sorted into spike trains using a state-of-the-art clustering algorithm, producing a mean of 3 cells per site. The algorithm classified 96% of spikes correctly when tetrode recordings were confirmed with simultaneous intracellular signals. Recording locations were verified with a new technique that creates electrolytic lesions visible in magnetic resonance imaging, eliminating the need for histological processing. In anticipation of future multi-tetrode work, the chronic chamber microdrive, a device for long-term tetrode delivery, was developed.

Science. Simultaneously recorded neighboring LIP neurons were found to have similar preferred targets in the memory saccade paradigm, but dissimilar peristimulus time histograms, PSTH). A majority of neighboring cell pairs had a difference in preferred directions of under 45° while the trial time of maximal response showed a broader distribution, suggesting homogeneity of tuning with het erogeneity of function. A continuum of response characteristics was present, rather than a set of specific response types; however, a mapping experiment suggests this may be because a given cell's PSTH changes shape as well as amplitude through the response field. Spike train autocovariance was tuned over target and changed through trial epoch, suggesting different mechanisms during memory versus background periods. Mean frequency-domain spike-to-spike coherence was concentrated below 50 Hz with a significant maximum of 0.08; mean time-domain coherence had a narrow peak in the range ±10 ms with a significant maximum of 0.03. Time-domain coherence was found to be untuned for short lags (10 ms), but significantly tuned at larger lags (50 ms).

Relevância:

10.00% 10.00%

Publicador:

Resumo:

Neurons in the primate lateral intraparietal area (area LIP) carry visual, saccade-related and eye position activities. The visual and saccade activities are anchored in a retinotopic framework and the overall response magnitude is modulated by eye position. It was proposed that the modulation by eye position might be the basis of a distributed coding of target locations in a head-centered space. Other recording studies demonstrated that area LIP is involved in oculomotor planning. These results overall suggest that area LIP transforms sensory information for motor functions. In this thesis I further explore the role of area LIP in processing saccadic eye movements by observing the effects of reversible inactivation of this area. Macaque monkeys were trained to do visually guided and memory saccades and a double saccade task to examine the use of eye position signal. Finally, by intermixing visual saccades with trials in which two targets were presented at opposite sides of the fixation point, I examined the behavior of visual extinction.

In chapter 2, I will show that lesion of area LIP results in increased latency of contralesional visual and memory saccades. Contralesional memory saccades are also hypometric and slower in velocity. Moreover, the impairment of memory saccades does not vary with the duration of the delay period. This suggests that the oculomotor deficits observed after inactivation of area LIP is not due to the disruption of spatial memory.

In chapter 3, I will show that lesion of area LIP does not severely affect the processing of spontaneous eye movement. However, the monkeys made fewer contralesional saccades and tended to confine their gaze to the ipsilesional field after inactivation of area LIP. On the other hand, lesion of area LIP results in extinction of the contralesional stimulus. When the initial fixation position was varied so that the retinal and spatial locations of the targets could be dissociated, it was found that the extinction behavior could best be described in a head-centered coordinate.

In chapter 4, I will show that inactivation of area LIP disrupts the use of eye position signal to compute the second movement correctly in the double saccade task. If the first saccade steps into the contralesional field, the error rate and latency of the second saccade are both increased. Furthermore, the direction of the first eye movement largely does not have any effect on the impairment of the second saccade. I will argue that this study provides important evidence that the extraretinal signal used for saccadic localization is eye position rather than a displacement vector.

In chapter 5, I will demonstrate that in parietal monkeys the eye drifts toward the lesion side at the end of the memory saccade in darkness. This result suggests that the eye position activity in the posterior parietal cortex is active in nature and subserves gaze holding.

Overall, these results further support the view that area LIP neurons encode spatial locations in a craniotopic framework and is involved in processing voluntary eye movements.

Relevância:

10.00% 10.00%

Publicador:

Resumo:

The scalability of CMOS technology has driven computation into a diverse range of applications across the power consumption, performance and size spectra. Communication is a necessary adjunct to computation, and whether this is to push data from node-to-node in a high-performance computing cluster or from the receiver of wireless link to a neural stimulator in a biomedical implant, interconnect can take up a significant portion of the overall system power budget. Although a single interconnect methodology cannot address such a broad range of systems efficiently, there are a number of key design concepts that enable good interconnect design in the age of highly-scaled CMOS: an emphasis on highly-digital approaches to solving ‘analog’ problems, hardware sharing between links as well as between different functions (such as equalization and synchronization) in the same link, and adaptive hardware that changes its operating parameters to mitigate not only variation in the fabrication of the link, but also link conditions that change over time. These concepts are demonstrated through the use of two design examples, at the extremes of the power and performance spectra.

A novel all-digital clock and data recovery technique for high-performance, high density interconnect has been developed. Two independently adjustable clock phases are generated from a delay line calibrated to 2 UI. One clock phase is placed in the middle of the eye to recover the data, while the other is swept across the delay line. The samples produced by the two clocks are compared to generate eye information, which is used to determine the best phase for data recovery. The functions of the two clocks are swapped after the data phase is updated; this ping-pong action allows an infinite delay range without the use of a PLL or DLL. The scheme's generalized sampling and retiming architecture is used in a sharing technique that saves power and area in high-density interconnect. The eye information generated is also useful for tuning an adaptive equalizer, circumventing the need for dedicated adaptation hardware.

On the other side of the performance/power spectra, a capacitive proximity interconnect has been developed to support 3D integration of biomedical implants. In order to integrate more functionality while staying within size limits, implant electronics can be embedded onto a foldable parylene (‘origami’) substrate. Many of the ICs in an origami implant will be placed face-to-face with each other, so wireless proximity interconnect can be used to increase communication density while decreasing implant size, as well as facilitate a modular approach to implant design, where pre-fabricated parylene-and-IC modules are assembled together on-demand to make custom implants. Such an interconnect needs to be able to sense and adapt to changes in alignment. The proposed array uses a TDC-like structure to realize both communication and alignment sensing within the same set of plates, increasing communication density and eliminating the need to infer link quality from a separate alignment block. In order to distinguish the communication plates from the nearby ground plane, a stimulus is applied to the transmitter plate, which is rectified at the receiver to bias a delay generation block. This delay is in turn converted into a digital word using a TDC, providing alignment information.

Relevância:

10.00% 10.00%

Publicador:

Resumo:

The main theme running through these three chapters is that economic agents are often forced to respond to events that are not a direct result of their actions or other agents actions. The optimal response to these shocks will necessarily depend on agents' understanding of how these shocks arise. The economic environment in the first two chapters is analogous to the classic chain store game. In this setting, the addition of unintended trembles by the agents creates an environment better suited to reputation building. The third chapter considers the competitive equilibrium price dynamics in an overlapping generations environment when there are supply and demand shocks.

The first chapter is a game theoretic investigation of a reputation building game. A sequential equilibrium model, called the "error prone agents" model, is developed. In this model, agents believe that all actions are potentially subjected to an error process. Inclusion of this belief into the equilibrium calculation provides for a richer class of reputation building possibilities than when perfect implementation is assumed.

In the second chapter, maximum likelihood estimation is employed to test the consistency of this new model and other models with data from experiments run by other researchers that served as the basis for prominent papers in this field. The alternate models considered are essentially modifications to the standard sequential equilibrium. While some models perform quite well in that the nature of the modification seems to explain deviations from the sequential equilibrium quite well, the degree to which these modifications must be applied shows no consistency across different experimental designs.

The third chapter is a study of price dynamics in an overlapping generations model. It establishes the existence of a unique perfect-foresight competitive equilibrium price path in a pure exchange economy with a finite time horizon when there are arbitrarily many shocks to supply or demand. One main reason for the interest in this equilibrium is that overlapping generations environments are very fruitful for the study of price dynamics, especially in experimental settings. The perfect foresight assumption is an important place to start when examining these environments because it will produce the ex post socially efficient allocation of goods. This characteristic makes this a natural baseline to which other models of price dynamics could be compared.

Relevância:

10.00% 10.00%

Publicador:

Resumo:

Therapy employing epidural electrostimulation holds great potential for improving therapy for patients with spinal cord injury (SCI) (Harkema et al., 2011). Further promising results from combined therapies using electrostimulation have also been recently obtained (e.g., van den Brand et al., 2012). The devices being developed to deliver the stimulation are highly flexible, capable of delivering any individual stimulus among a combinatorially large set of stimuli (Gad et al., 2013). While this extreme flexibility is very useful for ensuring that the device can deliver an appropriate stimulus, the challenge of choosing good stimuli is quite substantial, even for expert human experimenters. To develop a fully implantable, autonomous device which can provide useful therapy, it is necessary to design an algorithmic method for choosing the stimulus parameters. Such a method can be used in a clinical setting, by caregivers who are not experts in the neurostimulator's use, and to allow the system to adapt autonomously between visits to the clinic. To create such an algorithm, this dissertation pursues the general class of active learning algorithms that includes Gaussian Process Upper Confidence Bound (GP-UCB, Srinivas et al., 2010), developing the Gaussian Process Batch Upper Confidence Bound (GP-BUCB, Desautels et al., 2012) and Gaussian Process Adaptive Upper Confidence Bound (GP-AUCB) algorithms. This dissertation develops new theoretical bounds for the performance of these and similar algorithms, empirically assesses these algorithms against a number of competitors in simulation, and applies a variant of the GP-BUCB algorithm in closed-loop to control SCI therapy via epidural electrostimulation in four live rats. The algorithm was tasked with maximizing the amplitude of evoked potentials in the rats' left tibialis anterior muscle. 
These experiments show that the algorithm is capable of directing these experiments sensibly, finding effective stimuli in all four animals. Further, in direct competition with an expert human experimenter, the algorithm produced superior performance in terms of average reward and comparable or superior performance in terms of maximum reward. These results indicate that variants of GP-BUCB may be suitable for autonomously directing SCI therapy.

Relevância:

10.00% 10.00%

Publicador:

Resumo:

DNA damage is extremely detrimental to the cell and must be repaired to protect the genome. DNA is capable of conducting charge through the overlapping π-orbitals of stacked bases; this phenomenon is extremely sensitive to the integrity of the π-stack, as perturbations attenuate DNA charge transport (CT). Based on the E. coli base excision repair (BER) proteins EndoIII and MutY, it has recently been proposed that redox-active proteins containing metal clusters can utilize DNA CT to signal one another to locate sites of DNA damage.

To expand our repertoire of proteins that utilize DNA-mediated signaling, we measured the DNA-bound redox potential of the nucleotide excision repair (NER) helicase XPD from Sulfolobus acidocaldarius. A midpoint potential of 82 mV versus NHE was observed, resembling that of the previously reported BER proteins. The redox signal increases in intensity with ATP hydrolysis in only the WT protein and mutants that maintain ATPase activity and not for ATPase-deficient mutants. The signal increase correlates directly with ATP activity, suggesting that DNA-mediated signaling may play a general role in protein signaling. Several mutations in human XPD that lead to XP-related diseases have been identified; using SaXPD, we explored how these mutations, which are conserved in the thermophile, affect protein electrochemistry.

To further understand the electrochemical signaling of XPD, we studied the yeast S. cerevisiae Rad3 protein. ScRad3 mutants were incubated on a DNA-modified electrode and exhibited a similar redox potential to SaXPD. We developed a haploid strain of S. cerevisiae that allowed for easy manipulation of Rad3. In a survival assay, the ATPase- and helicase-deficient mutants show little survival, while the two disease-related mutants exhibit survival similar to WT. When both a WT and G47R (ATPase/helicase deficient) strain were challenged with different DNA damaging agents, both exhibited comparable survival in the presence of hydroxyurea, while with methyl methanesulfonate and camptothecin, the G47R strain exhibits a significant change in growth, suggesting that Rad3 is involved in repairing damage beyond traditional NER substrates. Together, these data expand our understanding of redox-active proteins at the interface of DNA repair.

Relevância:

10.00% 10.00%

Publicador:

Resumo:

Large quantities of teleseismic short-period seismograms recorded at SCARLET provide travel time, apparent velocity and waveform data for study of upper mantle compressional velocity structure. Relative array analysis of arrival times from distant (30° < Δ < 95°) earthquakes at all azimuths constrains lateral velocity variations beneath southern California. We compare dT/dΔ back azimuth and averaged arrival time estimates from the entire network for 154 events to the same parameters derived from small subsets of SCARLET. Patterns of mislocation vectors for over 100 overlapping subarrays delimit the spatial extent of an east-west striking, high-velocity anomaly beneath the Transverse Ranges. Thin lens analysis of the averaged arrival time differences, called 'net delay' data, requires the mean depth of the corresponding lens to be more than 100 km. Our results are consistent with the PKP-delay times of Hadley and Kanamori (1977), who first proposed the high-velocity feature, but we place the anomalous material at substantially greater depths than their 40-100 km estimate.

Detailed analysis of travel time, ray parameter and waveform data from 29 events occurring in the distance range 9° to 40° reveals the upper mantle structure beneath an oceanic ridge to depths of over 900 km. More than 1400 digital seismograms from earthquakes in Mexico and Central America yield 1753 travel times and 58 dT/dΔ measurements as well as high-quality, stable waveforms for investigation of the deep structure of the Gulf of California. The result of a travel time inversion with the tau method (Bessonova et al., 1976) is adjusted to fit the p(Δ) data, then further refined by incorporation of relative amplitude information through synthetic seismogram modeling. The application of a modified wave field continuation method (Clayton and McMechan, 1981) to the data with the final model confirms that GCA is consistent with the entire data set and also provides an estimate of the data resolution in velocity-depth space. We discover that the upper mantle under this spreading center has anomalously slow velocities to depths of 350 km, and place new constraints on the shape of the 660 km discontinuity.

Seismograms from 22 earthquakes along the northeast Pacific rim recorded in southern California form the data set for a comparative investigation of the upper mantle beneath the Cascade Ranges-Juan de Fuca region, an ocean-continent transit ion. These data consist of 853 seismograms (6° < Δ < 42°) which produce 1068 travel times and 40 ray parameter estimates. We use the spreading center model initially in synthetic seismogram modeling, and perturb GCA until the Cascade Ranges data are matched. Wave field continuation of both data sets with a common reference model confirms that real differences exist between the two suites of seismograms, implying lateral variation in the upper mantle. The ocean-continent transition model, CJF, features velocities from 200 and 350 km that are intermediate between GCA and T7 (Burdick and Helmberger, 1978), a model for the inland western United States. Models of continental shield regions (e.g., King and Calcagnile, 1976) have higher velocities in this depth range, but all four model types are similar below 400 km. This variation in rate of velocity increase with tectonic regime suggests an inverse relationship between velocity gradient and lithospheric age above 400 km depth.

Relevância:

10.00% 10.00%

Publicador:

Resumo:

The applicability of the white-noise method to the identification of a nonlinear system is investigated. Subsequently, the method is applied to certain vertebrate retinal neuronal systems and nonlinear, dynamic transfer functions are derived which describe quantitatively the information transformations starting with the light-pattern stimulus and culminating in the ganglion response which constitutes the visually-derived input to the brain. The retina of the catfish, Ictalurus punctatus, is used for the experiments.

The Wiener formulation of the white-noise theory is shown to be impractical and difficult to apply to a physical system. A different formulation based on crosscorrelation techniques is shown to be applicable to a wide range of physical systems provided certain considerations are taken into account. These considerations include the time-invariancy of the system, an optimum choice of the white-noise input bandwidth, nonlinearities that allow a representation in terms of a small number of characterizing kernels, the memory of the system and the temporal length of the characterizing experiment. Error analysis of the kernel estimates is made taking into account various sources of error such as noise at the input and output, bandwidth of white-noise input and the truncation of the gaussian by the apparatus.

Nonlinear transfer functions are obtained, as sets of kernels, for several neuronal systems: Light → Receptors, Light → Horizontal, Horizontal → Ganglion, Light → Ganglion and Light → ERG. The derived models can predict, with reasonable accuracy, the system response to any input. Comparison of model and physical system performance showed close agreement for a great number of tests, the most stringent of which is comparison of their responses to a white-noise input. Other tests include step and sine responses and power spectra.

Many functional traits are revealed by these models. Some are: (a) the receptor and horizontal cell systems are nearly linear (small signal) with certain "small" nonlinearities, and become faster (latency-wise and frequency-response-wise) at higher intensity levels, (b) all ganglion systems are nonlinear (half-wave rectification), (c) the receptive field center to ganglion system is slower (latency-wise and frequency-response-wise) than the periphery to ganglion system, (d) the lateral (eccentric) ganglion systems are just as fast (latency and frequency response) as the concentric ones, (e) (bipolar response) = (input from receptors) - (input from horizontal cell), (f) receptive field center and periphery exert an antagonistic influence on the ganglion response, (g) implications about the origin of ERG, and many others.

An analytical solution is obtained for the spatial distribution of potential in the S-space, which fits very well experimental data. Different synaptic mechanisms of excitation for the external and internal horizontal cells are implied.

Relevância:

10.00% 10.00%

Publicador:

Resumo:

This work concerns itself with the possibility of solutions, both cooperative and market based, to pollution abatement problems. In particular, we are interested in pollutant emissions in Southern California and possible solutions to the abatement problems enumerated in the 1990 Clean Air Act. A tradable pollution permit program has been implemented to reduce emissions, creating property rights associated with various pollutants.

Before we discuss the performance of market-based solutions to LA's pollution woes, we consider the existence of cooperative solutions. In Chapter 2, we examine pollutant emissions as a trans boundary public bad. We show that for a class of environments in which pollution moves in a bi-directional, acyclic manner, there exists a sustainable coalition structure and associated levels of emissions. We do so via a new core concept, one more appropriate to modeling cooperative emissions agreements (and potential defection from them) than the standard definitions.

However, this leaves the question of implementing pollution abatement programs unanswered. While the existence of a cost-effective permit market equilibrium has long been understood, the implementation of such programs has been difficult. The design of Los Angeles' REgional CLean Air Incentives Market (RECLAIM) alleviated some of the implementation problems, and in part exacerbated them. For example, it created two overlapping cycles of permits and two zones of permits for different geographic regions. While these design features create a market that allows some measure of regulatory control, they establish a very difficult trading environment with the potential for inefficiency arising from the transactions costs enumerated above and the illiquidity induced by the myriad assets and relatively few participants in this market.

It was with these concerns in mind that the ACE market (Automated Credit Exchange) was designed. The ACE market utilizes an iterated combined-value call market (CV Market). Before discussing the performance of the RECLAIM program in general and the ACE mechanism in particular, we test experimentally whether a portfolio trading mechanism can overcome market illiquidity. Chapter 3 experimentally demonstrates the ability of a portfolio trading mechanism to overcome portfolio rebalancing problems, thereby inducing sufficient liquidity for markets to fully equilibrate.

With experimental evidence in hand, we consider the CV Market's performance in the real world. We find that as the allocation of permits reduces to the level of historical emissions, prices are increasing. As of April of this year, prices are roughly equal to the cost of the Best Available Control Technology (BACT). This took longer than expected, due both to tendencies to mis-report emissions under the old regime, and abatement technology advances encouraged by the program. Vve also find that the ACE market provides liquidity where needed to encourage long-term planning on behalf of polluting facilities.

Relevância:

10.00% 10.00%

Publicador:

Resumo:

These studies explore how, where, and when representations of variables critical to decision-making are represented in the brain. In order to produce a decision, humans must first determine the relevant stimuli, actions, and possible outcomes before applying an algorithm that will select an action from those available. When choosing amongst alternative stimuli, the framework of value-based decision-making proposes that values are assigned to the stimuli and that these values are then compared in an abstract “value space” in order to produce a decision. Despite much progress, in particular regarding the pinpointing of ventromedial prefrontal cortex (vmPFC) as a region that encodes the value, many basic questions remain. In Chapter 2, I show that distributed BOLD signaling in vmPFC represents the value of stimuli under consideration in a manner that is independent of the type of stimulus it is. Thus the open question of whether value is represented in abstraction, a key tenet of value-based decision-making, is confirmed. However, I also show that stimulus-dependent value representations are also present in the brain during decision-making and suggest a potential neural pathway for stimulus-to-value transformations that integrates these two results.

More broadly speaking, there is both neural and behavioral evidence that two distinct control systems are at work during action selection. These two systems compose the “goal-directed system”, which selects actions based on an internal model of the environment, and the “habitual” system, which generates responses based on antecedent stimuli only. Computational characterizations of these two systems imply that they have different informational requirements in terms of input stimuli, actions, and possible outcomes. Associative learning theory predicts that the habitual system should utilize stimulus and action information only, while goal-directed behavior requires that outcomes as well as stimuli and actions be processed. In Chapter 3, I test whether areas of the brain hypothesized to be involved in habitual versus goal-directed control represent the corresponding theorized variables.

The question of whether one or both of these neural systems drives Pavlovian conditioning is less well studied. Chapter 4 describes an experiment in which subjects were scanned while engaged in a Pavlovian task with a simple but non-trivial structure. After comparing a variety of model-based and model-free learning algorithms (thought to underpin goal-directed and habitual decision-making, respectively), it was found that subjects’ reaction times were better explained by a model-based system. In addition, neural signaling of precision, a variable based on a representation of a world model, was found in the amygdala. These data indicate that the influence of model-based representations of the environment can extend even to the most basic learning processes.

Knowledge of the state of hidden variables in an environment is required for optimal inference regarding the abstract decision structure of a given environment and therefore can be crucial to decision-making in a wide range of situations. Inferring the state of an abstract variable requires the generation and manipulation of an internal representation of beliefs over the values of the hidden variable. In Chapter 5, I describe behavioral and neural results regarding the learning strategies employed by human subjects in a hierarchical state-estimation task. In particular, a comprehensive model-fitting and comparison process pointed to the use of "belief thresholding": subjects tended to eliminate low-probability hypotheses regarding the state of the environment from their internal model and ceased to update the corresponding variables. Thus, in concert with incremental Bayesian learning, humans explicitly manipulate their internal model of the generative process during hierarchical inference, consistent with a serial hypothesis-testing strategy.
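A minimal sketch of the belief-thresholding account, assuming a hypothetical threshold value rather than the one fitted to subjects: hypotheses whose posterior falls below the threshold are dropped from the internal model and no longer updated, while the surviving beliefs are renormalized.

```python
def update_beliefs(prior, likelihood, threshold=0.01):
    """Bayesian belief update with belief thresholding: compute the
    posterior over hypotheses, then prune any hypothesis whose posterior
    falls below `threshold` and renormalize over the survivors.
    The threshold value is illustrative."""
    posterior = {h: prior[h] * likelihood[h] for h in prior}
    z = sum(posterior.values())
    posterior = {h: p / z for h, p in posterior.items()}
    # Prune low-probability hypotheses; they are no longer updated.
    posterior = {h: p for h, p in posterior.items() if p >= threshold}
    z = sum(posterior.values())
    return {h: p / z for h, p in posterior.items()}
```

Once a hypothesis is pruned it cannot recover on later trials, which is what distinguishes this serial hypothesis-testing scheme from full incremental Bayesian updating.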

Resumo:

Bio-orthogonal non-canonical amino acid tagging (BONCAT) is an analytical method that allows the selective analysis of the subset of newly synthesized cellular proteins produced in response to a biological stimulus. In BONCAT, cells are treated with the non-canonical amino acid L-azidohomoalanine (Aha), which is utilized in protein synthesis in place of methionine by wild-type translational machinery. Nascent, Aha-labeled proteins are selectively ligated to affinity tags for enrichment and subsequently identified via mass spectrometry. The work presented in this thesis describes advancements in and applications of the BONCAT technology that establish it as an effective tool for analyzing proteome dynamics with time-resolved precision.

Chapter 1 introduces the BONCAT method and serves as an outline for the thesis as a whole. I discuss motivations behind the methodological advancements in Chapter 2 and the biological applications in Chapters 3 and 4.

Chapter 2 presents methodological developments that make BONCAT a proteomic tool capable not only of identifying newly synthesized proteins but also of accurately quantifying rates of protein synthesis. I demonstrate that this quantitative BONCAT approach can measure proteome-wide patterns of protein synthesis at time scales inaccessible to alternative techniques.
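As an illustration of the quantitative idea (not the thesis's actual analysis pipeline), a per-protein synthesis rate can be estimated by normalizing the Aha-labeled signal by the pulse duration; the protein names below are hypothetical examples, and linearity of labeling over the pulse is an assumed simplification.

```python
def synthesis_rates(labeled_intensity, pulse_minutes):
    """Crude proteome-wide synthesis-rate estimates: Aha-labeled MS
    intensity per protein divided by pulse duration. Assumes the labeled
    signal scales linearly with newly synthesized protein over the pulse
    -- an illustrative simplification."""
    return {protein: intensity / pulse_minutes
            for protein, intensity in labeled_intensity.items()}

# A short (10-minute) pulse yields per-minute rates comparable across proteins.
rates = synthesis_rates({'proteinA': 500.0, 'proteinB': 1500.0}, 10.0)
```

Shorter pulses are what give the method its time resolution: the labeled pool reflects only proteins made during the pulse window.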

In Chapter 3, I use BONCAT to study the biological function of the small RNA regulator CyaR in Escherichia coli. I correctly identify previously known CyaR targets, and validate several new CyaR targets, expanding the functional roles of the sRNA regulator.

In Chapter 4, I use BONCAT to measure the proteomic profile of the quorum sensing bacterium Vibrio harveyi during the time-dependent transition from individual- to group-behaviors. My analysis reveals new quorum-sensing-regulated proteins with diverse functions, including transcription factors, chemotaxis proteins, transport proteins, and proteins involved in iron homeostasis.

Overall, this work describes how to use BONCAT to perform quantitative, time-resolved proteomic analysis and demonstrates that these measurements can be used to study a broad range of biological processes.

Resumo:

Several patients of P. J. Vogel who had undergone cerebral commissurotomy for the control of intractable epilepsy were tested on a variety of tasks to measure aspects of cerebral organization concerned with lateralization in hemispheric function. From tests involving identification of shapes it was inferred that in the absence of the neocortical commissures, the left hemisphere still has access to certain types of information from the ipsilateral field. The major hemisphere can still make crude differentiations between various left-field stimuli, but is unable to specify exact stimulus properties. Most of the time the major hemisphere, having access to some ipsilateral stimuli, dominated the minor hemisphere in control of the body.

Competition for control of the body between the hemispheres is seen most clearly in tests of minor hemisphere language competency, in which it was determined that though the minor hemisphere does possess some minimal ability to express language, the major hemisphere prevented its expression much of the time. The right hemisphere was superior to the left in tests of perceptual visualization, and the two hemispheres appeared to use different strategies in attempting to solve the problems, namely, analysis for the left hemisphere and synthesis for the right hemisphere.

Analysis of the patients' verbal and performance I.Q.'s, as well as observations made throughout testing, suggests that the corpus callosum plays a critical role in activities that involve functions in which the minor hemisphere normally excels, and that the motor expression of these functions may normally come through the major hemisphere by way of the corpus callosum.

Lateral specialization is thought to be an evolutionary adaptation which overcame problems of a functional antagonism between the abilities normally associated with the two hemispheres. The tests of perception suggested that this function lateralized into the mute hemisphere because of an active counteraction by language. This latter idea was supported by the finding that left-handers, in whom language centers are likely to be bilateral, are greatly deficient on tests of perception.

Resumo:

Several types of seismological data, including surface wave group and phase velocities, travel times from large explosions, and teleseismic travel time anomalies, have indicated that there are significant regional variations in the upper few hundred kilometers of the mantle beneath continental areas. Body wave travel times and amplitudes from large chemical and nuclear explosions are used in this study to delineate the details of these variations beneath North America.

As a preliminary step in this study, theoretical P wave travel times, apparent velocities, and amplitudes have been calculated for a number of proposed upper mantle models, those of Gutenberg, Jeffreys, Lehmann, and Lukk and Nersesov. These quantities have been calculated for both P and S waves for model CIT11GB, which is derived from surface wave dispersion data. First arrival times for all the models except that of Lukk and Nersesov are in close agreement, but the travel time curves for later arrivals are both qualitatively and quantitatively very different. For model CIT11GB, there are two large, overlapping regions of triplication of the travel time curve, produced by regions of rapid velocity increase near depths of 400 and 600 km. Throughout the distance range from 10 to 40 degrees, the later arrivals produced by these discontinuities have larger amplitudes than the first arrivals. The amplitudes of body waves, in fact, are extremely sensitive to small variations in the velocity structure, and provide a powerful tool for studying structural details.
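The branch structure of such travel-time curves can be illustrated with a two-layer flat-earth sketch (the velocities and crustal thickness below are illustrative, not the models analyzed here): the direct wave is the first arrival at short range, and the head wave refracted along the faster layer overtakes it beyond the crossover distance, producing the multi-branch curves described above.

```python
import math

# Illustrative two-layer model: crustal velocity V1 over a faster
# half-space V2, crustal thickness H (values are assumptions).
V1, V2, H = 6.0, 8.1, 35.0   # km/s, km/s, km

def direct_time(x):
    """Direct wave traveling in the upper layer."""
    return x / V1

def refracted_time(x):
    """Head wave refracted along the interface:
    t = x/V2 + 2H * sqrt(1/V1^2 - 1/V2^2)."""
    return x / V2 + 2.0 * H * math.sqrt(1.0 / V1**2 - 1.0 / V2**2)

def first_arrival(x):
    """First-arrival branch of the travel-time curve at range x (km)."""
    return min(direct_time(x), refracted_time(x))
```

Real triplications require velocity gradients and spherical geometry, but the branch-crossing logic is the same: different ray paths trade off as the fastest arrival with increasing distance.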

Most of eastern North America, including the Canadian Shield, has a Pn velocity of about 8.1 km/sec, with a nearly abrupt increase in compressional velocity of about 0.3 km/sec at a depth varying regionally between 60 and 90 km. Variations in the structure of this part of the mantle are significant even within the Canadian Shield. The low-velocity zone is a minor feature in eastern North America and is subject to pronounced regional variations. It is 30 to 50 km thick, and occurs somewhere in the depth range from 80 to 160 km. The velocity decrease is less than 0.2 km/sec.

Consideration of the absolute amplitudes indicates that the attenuation due to anelasticity is negligible for 2 Hz waves in the upper 200 km along the southeastern and southwestern margins of the Canadian Shield. For compressional waves the average Q for this region is > 3000. The amplitudes also indicate that the velocity gradient is at least 2 × 10^-3 both above and below the low-velocity zone, implying that the temperature gradient is < 4.8°C/km if the regions are chemically homogeneous.
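The anelastic attenuation argument follows the standard relation A = A0 · exp(-π f t / Q): with Q above 3000, a 2 Hz wave loses only a few percent of its amplitude over typical upper-mantle travel times. A minimal sketch, with illustrative travel-time and Q values:

```python
import math

def anelastic_amplitude(a0, freq_hz, travel_time_s, q):
    """Amplitude after anelastic attenuation: A = A0 * exp(-pi * f * t / Q).
    Inputs: initial amplitude, frequency (Hz), travel time (s), quality
    factor Q. Values used below are illustrative."""
    return a0 * math.exp(-math.pi * freq_hz * travel_time_s / q)

# 2 Hz wave, 30 s in the attenuating region, Q = 3000: ~6% amplitude loss.
high_q = anelastic_amplitude(1.0, 2.0, 30.0, 3000.0)
```

This is why absolute amplitudes constrain Q: a much lower Q (say, a few hundred) would reduce 2 Hz amplitudes by more than half over the same path, which is not observed here.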

In western North America, the low-velocity zone is a pronounced feature, extending to the base of the crust and having minimum velocities of 7.7 to 7.8 km/sec. Beneath the Colorado Plateau and Southern Rocky Mountains provinces, there is a rapid velocity increase of about 0.3 km/sec, similar to that observed in eastern North America, but near a depth of 100 km.

Complicated travel time curves observed on profiles with stations in both eastern and western North America can be explained in detail by a model taking into account the lateral variations in the structure of the low-velocity zone. These variations involve primarily the velocity within the zone and the depth to the top of the zone; the depth to the bottom is, for both regions, between 140 and 160 km.

The depth to the transition zone near 400 km also varies regionally, by about 30-40 km. These differences imply variations of 250 °C in the temperature or 6 % in the iron content of the mantle, if the phase transformation of olivine to the spinel structure is assumed responsible. The structural variations at this depth are not correlated with those at shallower depths, and follow no obvious simple pattern.

The computer programs used in this study are described in the Appendices. The program TTINV (Appendix IV) fits spherically symmetric earth models to observed travel time data. The method, described in Appendix III, resembles conventional least-squares fitting, using partial derivatives of the travel time with respect to the model parameters to perturb an initial model. The ill-conditioning typical of least-squares techniques is avoided by minimizing both the travel time residuals and the model perturbations.
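The stabilization idea can be sketched as one damped least-squares iteration (the matrix names and damping value are illustrative assumptions, not TTINV's actual implementation): adding a damping term to the normal equations penalizes large model perturbations alongside the travel-time residuals.

```python
import numpy as np

def damped_step(G, residuals, damping=0.1):
    """One damped least-squares iteration: solve
        (G^T G + damping * I) dm = G^T r,
    which minimizes |G dm - r|^2 + damping * |dm|^2. G holds partial
    derivatives of travel time with respect to model parameters, r the
    travel-time residuals; `damping` is an illustrative value."""
    n = G.shape[1]
    return np.linalg.solve(G.T @ G + damping * np.eye(n), G.T @ residuals)
```

The damping term keeps the system invertible even when G^T G is nearly singular, at the cost of biasing each step toward smaller model perturbations.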

Spherically symmetric earth models, however, have been found inadequate to explain most of the observed travel times in this study. TVT4, a computer program that performs ray theory calculations for a laterally inhomogeneous earth model, is described in Appendix II. Appendix I gives a derivation of seismic ray theory for an arbitrarily inhomogeneous earth model.