10 results for "Surgical technique and possible pitfalls"

in CaltechTHESIS


Relevance: 100.00%

Abstract:

Assembling a nervous system requires exquisite specificity in the construction of neuronal connectivity. One method by which such specificity is implemented is the presence of chemical cues within the tissues, differentiating one region from another, and the presence of receptors for those cues on the surface of neurons and their axons that are navigating within this cellular environment.

Connections from one part of the nervous system to another often take the form of a topographic mapping. One widely studied model system that involves such a mapping is the vertebrate retinotectal projection: the set of connections between the eye and the optic tectum of the midbrain, which is the primary visual center in non-mammals and is homologous to the superior colliculus in mammals. In this projection the two-dimensional surface of the retina is mapped smoothly onto the two-dimensional surface of the tectum, such that light from neighboring points in visual space excites neighboring cells in the brain. This mapping is implemented at least in part via differential chemical cues in different regions of the tectum.

The Eph family of receptor tyrosine kinases and their cell-surface ligands, the ephrins, have been implicated in a wide variety of processes, generally involving cellular movement in response to extracellular cues. In particular, they possess expression patterns (i.e., complementary gradients of receptor in retina and ligand in tectum) and in vitro and in vivo activities and phenotypes (i.e., repulsive guidance of axons and defective mapping in mutants, respectively) consistent with the long-sought retinotectal chemical mapping cues.

The tadpole of Xenopus laevis, the South African clawed frog, is advantageous for in vivo retinotectal studies because of its transparency and manipulability. However, neither the expression patterns nor the retinotectal roles of these proteins have been well characterized in this system. We report here comprehensive descriptions in swimming-stage tadpoles of the messenger RNA expression patterns of eleven known Xenopus Eph and ephrin genes, including xephrin-A3, which is novel, and xEphB2, whose expression pattern has not previously been published in detail. We also report the results of in vivo protein injection perturbation studies on Xenopus retinotectal topography, which were negative, and of in vitro axonal guidance assays, which suggest a previously unrecognized attractive activity of ephrins at low concentrations on retinal ganglion cell axons. This raises the possibility that these axons find their correct targets in part by seeking out a preferred concentration of ligands appropriate to their individual receptor expression levels, rather than by being repelled to greater or lesser degrees by the ephrins while being attracted by some as-yet-unknown cue(s).

Relevance: 100.00%

Abstract:

An analytic technique is developed that couples with finite-difference calculations to extend their results to arbitrary distances. Finite differences and the analytic result, a boundary integral called the two-dimensional Kirchhoff integral, are applied to simple models and to three seismological problems dealing with data. The simple models include a thorough investigation of the seismological effects of a deep continental basin. The first problem concerns explosions at Yucca Flat, at the Nevada Test Site. By modeling both near-field strong-motion records and teleseismic P-waves simultaneously, it is shown that scattered surface waves are responsible for teleseismic complexity. The second problem deals with explosions at Amchitka Island, Alaska. The near-field seismograms are investigated using a variety of complex structures and sources. The third problem involves regional seismograms of Imperial Valley, California earthquakes recorded at Pasadena, California. The data are shown to contain evidence of deterministic structure, but the lack of more direct measurements of the structure, together with possible three-dimensional effects, makes two-dimensional modeling of these data difficult.
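
For orientation, a minimal sketch of the boundary-integral idea underlying such a coupling (notation assumed here, not taken from the thesis): in the frequency domain, the two-dimensional representation theorem extrapolates the wavefield u from its values and normal derivatives on a boundary S, using the 2-D free-space Green's function G:

    u(\mathbf{x},\omega) = \int_S \Big[ G(\mathbf{x},\boldsymbol{\xi},\omega)\, \partial_n u(\boldsymbol{\xi},\omega)
      - u(\boldsymbol{\xi},\omega)\, \partial_n G(\mathbf{x},\boldsymbol{\xi},\omega) \Big]\, \mathrm{d}S(\boldsymbol{\xi}),
    \qquad G = \frac{i}{4}\, H_0^{(1)}\!\big( \omega\, |\mathbf{x}-\boldsymbol{\xi}| / c \big).

In a coupled scheme of this kind, the finite-difference grid supplies u and its normal derivative on S, and the integral carries the field to arbitrary distance.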

Relevance: 100.00%

Abstract:

Amino acid sequences have recently been reported for several proteins, including the envelope glycoproteins of Sindbis virus, which all probably span the plasma membrane with a common topology: a large N-terminal, extracellular portion, a short region buried in the bilayer, and a short C-terminal intracellular segment. The regions of these proteins buried in the bilayer correspond to portions of the protein sequences which contain a stretch of hydrophobic amino acids and which have other common characteristics, as discussed. Reasons are also described for uncertainty, in some proteins more than others, as to the precise location of some parts of the sequence relative to the membrane.
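
As an aside, a minimal sketch (not from the thesis; the sequence, window length, and threshold below are invented for illustration) of how such a membrane-buried stretch of hydrophobic amino acids can be flagged with a sliding-window hydropathy scan, here using the standard Kyte-Doolittle scale:

    # Sliding-window hydropathy scan to flag candidate membrane-spanning
    # stretches of hydrophobic residues (window and threshold are assumed).
    KD = {  # Kyte-Doolittle hydropathy values
        'A': 1.8, 'R': -4.5, 'N': -3.5, 'D': -3.5, 'C': 2.5,
        'Q': -3.5, 'E': -3.5, 'G': -0.4, 'H': -3.2, 'I': 4.5,
        'L': 3.8, 'K': -3.9, 'M': 1.9, 'F': 2.8, 'P': -1.6,
        'S': -0.8, 'T': -0.7, 'W': -0.9, 'Y': -1.3, 'V': 4.2,
    }

    def hydrophobic_windows(seq, window=19, threshold=1.6):
        """Yield (start, mean hydropathy) for windows above the threshold."""
        for i in range(len(seq) - window + 1):
            mean = sum(KD[aa] for aa in seq[i:i + window]) / window
            if mean >= threshold:
                yield i, mean

    # Example: a synthetic sequence with an obviously hydrophobic core.
    seq = "MDRSKE" + "LIVLAVILFLIGVALIVFL" + "RRKDSQE"
    for start, score in hydrophobic_windows(seq):
        print(f"candidate membrane-spanning stretch at {start}: {score:.2f}")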

The signal hypothesis for the transmembrane translocation of proteins is briefly described and its general applicability is reviewed. There are many proteins whose translocation is accurately described by this hypothesis, but some proteins are translocated in a different manner.

The transmembrane glycoproteins E1 and E2 of Sindbis virus, as well as the only other virion protein, the capsid protein, were purified in amounts sufficient for biochemical analysis using sensitive techniques. The amino acid composition of each protein was determined, and extensive N-terminal sequences were obtained for E1 and E2. By these techniques E1 and E2 are indistinguishable from most water-soluble proteins, as they do not contain an obvious excess of hydrophobic amino acids in their N-terminal regions or in the intact molecule.

The capsid protein was found to be blocked, and so its N-terminus could not be sequenced by the usual methods. However, with the use of a special labeling technique it was possible to incorporate tritiated acetate into the N-terminus of the protein with good specificity, which aided the purification of peptides from which the first amino acids of the N-terminal sequence could be identified.

Nanomole amounts of PE2, the intracellular precursor of E2, were purified by an immuno-affinity technique, and its N-terminus was analyzed. Together with other work, these results showed that PE2 is not synthesized with an N-terminal extension, and the signal sequence for translocation is probably the N-terminal amino acid sequence of the protein itself. This N-terminus was found to be 80-90% blocked, likewise by N-acetylation, and this acetylation did not affect its function as a signal sequence. The putative signal sequence was also found to contain a glycosylated asparagine residue, but inhibition of this glycosylation did not lead to cleavage of the sequence.

Relevance: 100.00%

Abstract:

Technology scaling has enabled drastic growth in the computational and storage capacity of integrated circuits (ICs). This constant growth drives an increasing demand for high-bandwidth communication between and within ICs. In this dissertation we focus on low-power solutions that address this demand. We divide communication links into three subcategories depending on the communication distance; each category has a different set of challenges and requirements and is affected by CMOS technology scaling in a different manner. We begin with short-range chip-to-chip links for board-level communication, then discuss board-to-board links, which demand a longer communication range, and finally on-chip links with communication ranges of a few millimeters.

Electrical signaling is a natural choice for chip-to-chip communication due to efficient integration and low cost. I/O data rates have increased to the point where electrical signaling is now limited by the channel bandwidth. In order to achieve multi-Gb/s data rates, complex designs that equalize the channel are necessary. In addition, a high level of parallelism is central to sustaining bandwidth growth. Decision feedback equalization (DFE) is one of the most commonly employed techniques for overcoming the limited bandwidth of electrical channels. A linear, low-power summer is the central block of a DFE. Conventional approaches implement the summer with current-mode techniques, which require high power consumption. In order to achieve low-power operation we propose performing the summation in the charge domain. This approach enables a low-power and compact realization of the DFE as well as crosstalk cancellation. A prototype receiver was fabricated in 45 nm SOI CMOS to validate the functionality of the proposed technique and was tested over channels with different levels of loss and coupling. Measurement results show that the receiver can equalize channels with up to 21 dB of loss while consuming about 7.5 mW from a 1.2 V supply. We also introduce a compact, low-power transmitter employing passive equalization. The efficacy of the proposed technique is demonstrated through implementation of a prototype in 65 nm CMOS. The design achieves up to 20 Gb/s data rate while consuming less than 10 mW.
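
As background, a behavioral sketch of the DFE principle itself (the channel taps and noiseless setting below are assumptions for illustration; the thesis' contribution is the charge-domain circuit realization of the summer, which this model does not capture):

    import numpy as np

    # Behavioral model of a decision feedback equalizer (DFE).
    # The summer computes y[n] = x[n] - sum_k w[k] * d[n-1-k]; the slicer
    # decision d[n] = sign(y[n]) feeds back to cancel post-cursor ISI.
    rng = np.random.default_rng(0)
    bits = rng.choice([-1.0, 1.0], size=1000)

    h = np.array([1.0, 0.45, 0.2])        # assumed channel: cursor + 2 post-cursors
    x = np.convolve(bits, h)[:len(bits)]  # received samples with ISI

    w = h[1:]                             # DFE taps matched to the post-cursors
    d = np.zeros(len(x))
    for n in range(len(x)):
        fb = sum(w[k] * d[n - 1 - k] for k in range(len(w)) if n - 1 - k >= 0)
        y = x[n] - fb                     # summer output (charge-domain in the thesis)
        d[n] = 1.0 if y >= 0 else -1.0    # slicer decision

    print("bit errors:", int(np.sum(d != bits)))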

An alternative to electrical signaling is to employ optical signaling for chip-to-chip interconnection, which offers low channel loss and crosstalk while providing high communication bandwidth. In this work we demonstrate the possibility of building compact and low-power optical receivers. A novel RC front-end is proposed that combines dynamic offset modulation and double-sampling techniques to eliminate the need for a short time constant at the input of the receiver. Unlike conventional designs, this receiver does not require a high-gain stage that runs at the data rate, making it suitable for low-power implementations. In addition, it allows time-division multiplexing to support very high data rates. A prototype was implemented in 65 nm CMOS and achieved up to 24 Gb/s with better than 0.4 pJ/b power efficiency per channel. As the proposed design mainly employs digital blocks, it benefits greatly from technology scaling in terms of power and area savings.
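
A toy model of the double-sampling idea, with a decision-dependent correction standing in for dynamic offset modulation (all parameters assumed; this sketches the principle, not the published circuit): the slow front-end node is sampled once per bit, consecutive samples are differenced so the decision depends on the change over a bit time rather than the absolute level, and the expected RC droop of the previous level is subtracted out.

    import numpy as np

    # Toy double-sampling front-end (illustrative parameters throughout).
    rng = np.random.default_rng(1)
    bits = rng.integers(0, 2, size=200)

    T, tau = 1.0, 8.0                    # bit time and slow RC time constant
    decay = np.exp(-T / tau)
    v, samples = 0.0, []
    for b in bits:
        i_photo = 1.0 if b else 0.1      # "1" = strong photocurrent, "0" = residual
        v = v * decay + i_photo          # integrate one bit, with RC droop
        samples.append(v)

    # Difference consecutive samples; the correction term cancels the droop
    # of the previous level (the role played by dynamic offset modulation).
    decisions = [int(bits[0])]
    for n in range(1, len(samples)):
        dv = samples[n] - samples[n - 1]
        droop = -(1 - decay) * samples[n - 1]   # expected change for zero input
        decisions.append(1 if dv - droop > 0.55 else 0)  # 0.55 ~ midpoint (assumed)

    print("errors:", int(np.sum(np.array(decisions) != bits)))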

As the technology scales, the number of transistors on a chip grows, necessitating a corresponding increase in the bandwidth of the on-chip wires. In this dissertation, we take a close look at wire scaling and investigate its effect on wire performance metrics. We explore a novel on-chip communication link based on a double-sampling architecture and dynamic offset modulation technique that enables low power consumption and high data rates while achieving high bandwidth density in 28 nm CMOS technology. The functionality of the link is demonstrated using minimum-pitch on-chip wires of different lengths. Measurement results show that the link achieves up to 20 Gb/s of data rate (12.5 Gb/s/µm) with better than 136 fJ/b of power efficiency.

Relevance: 100.00%

Abstract:

This work concerns itself with the possibility of solutions, both cooperative and market-based, to pollution abatement problems. In particular, we are interested in pollutant emissions in Southern California and possible solutions to the abatement problems enumerated in the 1990 Clean Air Act. A tradable pollution permit program has been implemented to reduce emissions, creating property rights associated with various pollutants.

Before we discuss the performance of market-based solutions to LA's pollution woes, we consider the existence of cooperative solutions. In Chapter 2, we examine pollutant emissions as a transboundary public bad. We show that for a class of environments in which pollution moves in a bi-directional, acyclic manner, there exists a sustainable coalition structure and associated levels of emissions. We do so via a new core concept, one more appropriate to modeling cooperative emissions agreements (and potential defection from them) than the standard definitions.

However, this leaves the question of implementing pollution abatement programs unanswered. While the existence of a cost-effective permit market equilibrium has long been understood, the implementation of such programs has been difficult. The design of Los Angeles' REgional CLean Air Incentives Market (RECLAIM) alleviated some of the implementation problems and in part exacerbated others. For example, it created two overlapping cycles of permits and two zones of permits for different geographic regions. While these design features create a market that allows some measure of regulatory control, they establish a very difficult trading environment with the potential for inefficiency arising from the transaction costs enumerated above and the illiquidity induced by the myriad assets and relatively few participants in this market.

It was with these concerns in mind that the ACE market (Automated Credit Exchange) was designed. The ACE market utilizes an iterated combined-value call market (CV Market). Before discussing the performance of the RECLAIM program in general and the ACE mechanism in particular, we test experimentally whether a portfolio trading mechanism can overcome market illiquidity. Chapter 3 experimentally demonstrates the ability of a portfolio trading mechanism to overcome portfolio rebalancing problems, thereby inducing sufficient liquidity for markets to fully equilibrate.
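
To make the mechanism concrete, a brute-force sketch of the clearing problem in a combined-value call market (the bids, packages, and supplies below are invented; real implementations solve this winner-determination problem with integer programming): each bid names a package of assets and a limit value, and the exchange selects the value-maximizing feasible set of bids.

    from itertools import combinations

    # Toy combined-value call market clearing (illustrative only).
    supply = {"zone1_cycle1": 10, "zone1_cycle2": 8, "zone2_cycle1": 6}

    bids = [  # (package of permit quantities demanded, limit value), assumed
        ({"zone1_cycle1": 6, "zone1_cycle2": 4}, 95.0),
        ({"zone1_cycle1": 5, "zone2_cycle1": 5}, 80.0),
        ({"zone1_cycle2": 6}, 40.0),
        ({"zone1_cycle1": 4, "zone2_cycle1": 2}, 55.0),
    ]

    best_value, best_set = 0.0, ()
    for r in range(1, len(bids) + 1):
        for subset in combinations(range(len(bids)), r):
            demand = {}
            for i in subset:
                for asset, q in bids[i][0].items():
                    demand[asset] = demand.get(asset, 0) + q
            # Accept the subset only if every permit type stays within supply.
            if all(demand.get(a, 0) <= s for a, s in supply.items()):
                value = sum(bids[i][1] for i in subset)
                if value > best_value:
                    best_value, best_set = value, subset

    print("accepted bids:", best_set, "total value:", best_value)

Because bids are evaluated as all-or-nothing packages, a trader rebalancing a portfolio never risks executing only one leg of a multi-asset trade, which is the liquidity problem the portfolio mechanism addresses.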

With experimental evidence in hand, we consider the CV Market's performance in the real world. We find that as the allocation of permits falls to the level of historical emissions, prices are increasing. As of April of this year, prices are roughly equal to the cost of the Best Available Control Technology (BACT). This took longer than expected, due both to tendencies to misreport emissions under the old regime and to abatement technology advances encouraged by the program. We also find that the ACE market provides liquidity where needed to encourage long-term planning on behalf of polluting facilities.

Relevance: 100.00%

Abstract:

These studies explore how, where, and when variables critical to decision-making are represented in the brain. In order to produce a decision, humans must first determine the relevant stimuli, actions, and possible outcomes before applying an algorithm that selects an action from those available. When choosing amongst alternative stimuli, the framework of value-based decision-making proposes that values are assigned to the stimuli and that these values are then compared in an abstract “value space” in order to produce a decision. Despite much progress, in particular the pinpointing of ventromedial prefrontal cortex (vmPFC) as a region that encodes value, many basic questions remain. In Chapter 2, I show that distributed BOLD signaling in vmPFC represents the value of stimuli under consideration in a manner that is independent of stimulus type, confirming that value is represented in abstraction, a key tenet of value-based decision-making. However, I also show that stimulus-dependent value representations are present in the brain during decision-making, and I suggest a potential neural pathway for stimulus-to-value transformations that integrates these two results.

More broadly speaking, there is both neural and behavioral evidence that two distinct control systems are at work during action selection: the “goal-directed” system, which selects actions based on an internal model of the environment, and the “habitual” system, which generates responses based on antecedent stimuli only. Computational characterizations of these two systems imply that they have different informational requirements in terms of input stimuli, actions, and possible outcomes. Associative learning theory predicts that the habitual system should utilize stimulus and action information only, while goal-directed behavior requires that outcomes as well as stimuli and actions be processed. In Chapter 3, I test whether areas of the brain hypothesized to be involved in habitual versus goal-directed control represent the corresponding theorized variables.

The question of whether one or both of these neural systems drives Pavlovian conditioning is less well studied. Chapter 4 describes an experiment in which subjects were scanned while engaged in a Pavlovian task with a simple but non-trivial structure. After comparing a variety of model-based and model-free learning algorithms (thought to underpin goal-directed and habitual decision-making, respectively), it was found that subjects' reaction times were better explained by a model-based system. In addition, neural signaling of precision, a variable based on a representation of a world model, was found in the amygdala. These data indicate that the influence of model-based representations of the environment can extend even to the most basic learning processes.
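
For contrast, a minimal sketch of the two algorithm families (the task structure and parameters here are assumed, not those of the thesis experiment): a model-free learner caches values via temporal-difference prediction errors, while a model-based learner estimates the task's structure and derives values from it.

    import numpy as np

    # Minimal contrast between model-free and model-based Pavlovian learning.
    rng = np.random.default_rng(2)
    p_reward = {0: 0.8, 1: 0.2}          # true reward probability per cue (assumed)
    alpha = 0.1                           # learning rate (assumed)

    V_mf = {0: 0.0, 1: 0.0}              # model-free cached values
    counts = {0: [1, 1], 1: [1, 1]}      # model-based: [reward, no-reward] counts

    for _ in range(500):
        cue = int(rng.integers(0, 2))
        r = float(rng.random() < p_reward[cue])
        V_mf[cue] += alpha * (r - V_mf[cue])   # TD(0) prediction-error update
        counts[cue][0 if r else 1] += 1        # update the explicit world model

    V_mb = {c: counts[c][0] / sum(counts[c]) for c in counts}  # value from model
    print("model-free:", V_mf)
    print("model-based:", V_mb)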

Knowledge of the state of hidden variables in an environment is required for optimal inference regarding the abstract decision structure of a given environment and therefore can be crucial to decision-making in a wide range of situations. Inferring the state of an abstract variable requires the generation and manipulation of an internal representation of beliefs over the values of the hidden variable. In Chapter 5, I describe behavioral and neural results regarding the learning strategies employed by human subjects in a hierarchical state-estimation task. In particular, a comprehensive model fit and comparison process pointed to the use of "belief thresholding". This implies that subjects tended to eliminate low-probability hypotheses regarding the state of the environment from their internal model and ceased to update the corresponding variables. Thus, in concert with incremental Bayesian learning, humans explicitly manipulate their internal model of the generative process during hierarchical inference, consistent with a serial hypothesis-testing strategy.
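
A schematic of belief thresholding in this spirit (the likelihoods, threshold, and observation sequence below are assumptions): hypotheses whose posterior falls below a cutoff are pruned from the internal model and no longer updated.

    import numpy as np

    # Bayesian belief updating with "belief thresholding" (illustrative).
    likelihood = np.array([            # P(observation | hypothesis), assumed
        [0.7, 0.3],                    # hypothesis 0
        [0.5, 0.5],                    # hypothesis 1
        [0.2, 0.8],                    # hypothesis 2
    ])
    belief = np.full(3, 1 / 3)         # uniform prior over hypotheses
    active = np.ones(3, dtype=bool)    # hypotheses still in the model
    THRESH = 0.05                      # pruning threshold (assumed)

    for obs in [0, 0, 1, 0, 0, 0]:     # assumed observation sequence
        belief[active] *= likelihood[active, obs]  # Bayes update, active set only
        belief[~active] = 0.0
        belief /= belief.sum()
        active &= belief >= THRESH     # prune low-probability hypotheses
        print(np.round(belief, 3), "active:", np.flatnonzero(active))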

Relevance: 100.00%

Abstract:

Morphogenesis is a phenomenon of intricate balance and dynamic interplay between processes occurring at a wide range of scales (spatial, temporal, and energetic). During development, a variety of physical mechanisms are employed by tissues to simultaneously pattern, move, and differentiate based on information exchange between constituent cells, perhaps more than at any other time during an organism's life. To fully understand such events, a combined theoretical and experimental framework is required to assist in deciphering the correlations at both structural and functional levels, at scales that include the intracellular and tissue levels as well as organs and organ systems.

Microscopy, especially diffraction-limited light microscopy, has emerged as a central tool to capture the spatio-temporal context of life processes. Imaging has the unique advantage of watching biological events as they unfold over time at single-cell resolution in the intact animal. In this work I present a range of problems in morphogenesis, each unique in its requirements for novel quantitative imaging, both in terms of technique and analysis.

Understanding the molecular basis for a developmental process involves investigating how genes and their products (mRNA and proteins) function in the context of a cell. Structural information holds the key to insights into mechanisms, and imaging fixed specimens paves the first step towards deciphering gene function. The work presented in this thesis starts with the demonstration that the fluorescent signal from the challenging environment of whole-mount imaging, obtained by in situ hybridization chain reaction (HCR), scales linearly with the number of copies of target mRNA to provide quantitative sub-cellular mapping of mRNA expression within intact vertebrate embryos.

The work then progresses to address aspects of imaging live embryonic development in a number of species. While processes such as avian cartilage growth require high spatial resolution and lower time resolution, dynamic events during zebrafish somitogenesis require higher time resolution to capture protein localization as the somites mature. The requirements on imaging are even more stringent in the case of the embryonic zebrafish heart, which beats with a frequency of ~2-2.5 Hz, thereby requiring very fast imaging techniques based on a two-photon light-sheet microscope to capture its dynamics.

In each of these cases, ranging from the level of molecules to organs, an imaging framework is developed, both in terms of technique and analysis, to allow quantitative assessment of the process in vivo. Overall the work presented in this thesis combines new quantitative tools with novel microscopy for the precise understanding of processes in embryonic development.

Relevance: 100.00%

Abstract:

We study the behavior of granular materials at three length scales. At the smallest length scale, the grain-scale, we study inter-particle forces and "force chains". Inter-particle forces are the natural building blocks of constitutive laws for granular materials. Force chains are a key signature of the heterogeneity of granular systems. Despite their fundamental importance for calibrating grain-scale numerical models and elucidating constitutive laws, inter-particle forces have not been fully quantified in natural granular materials. We present a numerical force inference technique for determining inter-particle forces from experimental data and apply the technique to two-dimensional and three-dimensional systems under quasi-static and dynamic load. These experiments validate the technique and provide insight into the quasi-static and dynamic behavior of granular materials.
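
A schematic of the structure of such an inverse problem (the geometry and loads below are invented; the actual technique also uses intra-particle stress information and handles tangential forces and torque balance): contact forces are the unknowns, and force balance on each particle supplies linear constraints, solved here in a least-squares sense.

    import numpy as np

    # Force inference as a linear inverse problem (illustrative).
    contacts = [            # (particle a, particle b, unit normal a -> b), assumed
        (0, 1, np.array([1.0, 0.0])),
        (1, 2, np.array([0.0, 1.0])),
        (0, 2, np.array([0.6, 0.8])),
    ]
    external = {            # known boundary/body force on each particle, assumed
        0: np.array([-1.6, -0.8]),
        1: np.array([1.0, -1.0]),
        2: np.array([0.6, 1.8]),
    }

    n_p, n_c = 3, len(contacts)
    A = np.zeros((2 * n_p, n_c))        # equilibrium matrix
    b = np.zeros(2 * n_p)
    for j, (pa, pb, n_hat) in enumerate(contacts):
        A[2*pa:2*pa+2, j] += n_hat      # force on a from contact j
        A[2*pb:2*pb+2, j] -= n_hat      # equal and opposite on b
    for p, f in external.items():
        b[2*p:2*p+2] = -f               # contact forces must balance external load

    f_contacts, *_ = np.linalg.lstsq(A, b, rcond=None)
    print("inferred contact force magnitudes:", np.round(f_contacts, 3))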

At a larger length scale, the mesoscale, we study the emergent frictional behavior of a collection of grains. Properties of granular materials at this intermediate scale are crucial inputs for macro-scale continuum models. We derive friction laws for granular materials at the mesoscale by applying averaging techniques to grain-scale quantities. These laws portray the nature of steady-state frictional strength as a competition between steady-state dilation and grain-scale dissipation rates. The laws also directly link the rate of dilation to the non-steady-state frictional strength.

At the macro-scale, we investigate continuum modeling techniques capable of simulating the distinct solid-like, liquid-like, and gas-like behaviors exhibited by granular materials in a single computational domain. We propose a Smoothed Particle Hydrodynamics (SPH) approach for granular materials with a viscoplastic constitutive law. The constitutive law uses a rate-dependent and dilation-dependent friction law. We provide a theoretical basis for a dilation-dependent friction law using similar analysis to that performed at the mesoscale. We provide several qualitative and quantitative validations of the technique and discuss ongoing work aiming to couple the granular flow with gas and fluid flows.
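
The thesis' specific friction law is its own; for orientation, a widely used law of this rate-dependent family is the µ(I) rheology, in which the friction coefficient grows with the dimensionless inertial number I (here µ_s and µ_2 are the quasi-static and limiting friction coefficients, I_0 a material constant, \dot{\gamma} the shear rate, d the grain diameter, P the confining pressure, and ρ the grain density):

    \mu(I) = \mu_s + \frac{\mu_2 - \mu_s}{1 + I_0/I},
    \qquad I = \frac{\dot{\gamma}\, d}{\sqrt{P/\rho}}.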

Relevance: 100.00%

Abstract:

The Mössbauer technique has been used to study the nuclear hyperfine interactions and lifetimes in W182 (2+ state) and W183 (3/2- and 5/2- states), with the following results: g(5/2-)/g(2+) = 1.40 ± 0.04; g(3/2-) = -0.07 ± 0.07; Q(5/2-)/Q(2+) = 0.94 ± 0.04; T1/2(3/2-) = 0.184 ± 0.005 nsec; T1/2(5/2-) ≳ 0.7 nsec. These quantities are discussed in terms of a rotation-particle interaction in W183 due to Coriolis coupling. From the measured quantities and additional information on γ-ray transition intensities, magnetic single-particle matrix elements are derived. It is inferred from these that the two effective g-factors, resulting from the Nilsson-model calculation of the single-particle matrix elements for the spin operators ŝz and ŝ+, are not equal, consistent with a proposal of Bochnacki and Ogaza.

The internal magnetic fields at the tungsten nucleus were determined for substitutional solid solutions of tungsten in iron, cobalt, and nickel. With g(2+) = 0.24 the results are: |Heff(W-Fe)| = 715 ± 10 kG; |Heff(W-Co)| = 360 ± 10 kG; |Heff(W-Ni)| = 90 ± 25 kG. The electric field gradients at the tungsten nucleus were determined for WS2 and WO3. With Q(2+) = -1.81 b the results are: for WS2, eq = -(1.86 ± 0.05) × 10^18 V/cm^2; for WO3, eq = (1.54 ± 0.04) × 10^18 V/cm^2 and η = 0.63 ± 0.02.

The 5/2- state of Pt195 has also been studied with the Mössbauer technique, and the g-factor of this state has been determined to be -0.41 ± 0.03. The following magnetic fields at the Pt nucleus were found: in an Fe lattice, 1.19 ± 0.04 MG; in a Co lattice, 0.86 ± 0.03 MG; and in a Ni lattice, 0.36 ± 0.04 MG. Isomeric shifts have been detected in a number of compounds and alloys and have been interpreted to imply that the mean-square radius of the Pt195 nucleus in the first excited state is smaller than in the ground state.

Relevance: 100.00%

Abstract:

The experimental portion of this thesis estimates the power spectral density of very low frequency semiconductor noise, from 10^-6.3 cps to 1 cps, with greater accuracy than that achieved in previous similar attempts. It is concluded that the spectrum is 1/f^α with α approximately 1.3 over most of the frequency range, though α appears to be about 1 in the lowest decade. The noise sources are, among others, the first-stage circuits of a grounded-input silicon epitaxial operational amplifier. This thesis also investigates a peculiar form of stationarity which seems to distinguish flicker noise from other semiconductor noise.

In order to reduce by an order of magnitude the pernicious effects of temperature drifts, semiconductor "aging", and possible mechanical failures associated with prolonged periods of data taking, 10 independent noise sources were time-multiplexed and their spectral estimates were subsequently averaged. It is demonstrated that, if the sources have similar spectra, this reduces the necessary data-taking time by a factor of 10 for a given accuracy.
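
The factor-of-10 saving follows from the variance of an average of independent estimates; as a sketch, with S_i(f) denoting the individual spectral estimates:

    \bar{S}(f) = \frac{1}{N} \sum_{i=1}^{N} S_i(f),
    \qquad \mathrm{Var}\,\bar{S}(f) = \frac{1}{N}\, \mathrm{Var}\, S_i(f),
    \qquad N = 10.

Since the variance of a single smoothed estimate falls roughly in inverse proportion to record length, averaging ten records of length T yields about the accuracy of one record of length 10T.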

In view of the measured high temperature sensitivity of the noise sources, it was necessary to combine the passive attenuation of a special-material container with active control. The noise sources were placed in a copper-epoxy container of high heat capacity and medium heat conductivity, and that container was immersed in a temperature-controlled circulating ethylene-glycol bath.

Other spectra of interest, estimated from data taken concurrently with the semiconductor noise data, were the spectra of the bath's controlled temperature, the semiconductor surface temperature, and the power-supply voltage amplitude fluctuations. A brief description of the equipment constructed to obtain these data is included.

The analytical portion of this work is concerned with the following questions. What is the best final spectral density estimate given 10 statistically independent ones of varying quality and magnitude? How can the Blackman and Tukey algorithm, which is used for spectral estimation in this work, be improved upon? How can non-equidistant sampling reduce data-processing cost? Should one try to remove common trends shared by supposedly statistically independent noise sources and, if so, what are the mathematical difficulties involved? What is a physically plausible mathematical model that can account for flicker noise, and what are its mathematical implications for the statistical properties of the noise? Finally, the variance of the spectral estimate obtained through the Blackman-Tukey algorithm is analyzed in greater detail; the variance is shown to diverge for α ≥ 1 in an assumed power spectrum of k/|f|^α, unless the assumed spectrum is "truncated".
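
For concreteness, a minimal sketch of the Blackman-Tukey procedure referred to above (the lag window and lengths are assumed): estimate the autocorrelation, taper it with a lag window, and Fourier-transform the result.

    import numpy as np

    # Minimal Blackman-Tukey spectral estimator (illustrative parameters).
    # Steps: biased autocorrelation estimate -> lag window -> Fourier transform.
    def blackman_tukey_psd(x, max_lag):
        x = np.asarray(x, float) - np.mean(x)
        n = len(x)
        # Biased autocorrelation estimate for lags 0..max_lag.
        r = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(max_lag + 1)])
        w = np.hanning(2 * max_lag + 1)[max_lag:]   # one-sided lag window (assumed)
        r_w = r * w
        # Even extension of the windowed autocorrelation gives a real spectrum.
        r_full = np.concatenate([r_w, r_w[-2:0:-1]])
        psd = np.abs(np.fft.rfft(r_full))
        freqs = np.fft.rfftfreq(len(r_full))
        return freqs, psd

    # Example: white noise should give an approximately flat estimate.
    rng = np.random.default_rng(3)
    f, S = blackman_tukey_psd(rng.standard_normal(4096), max_lag=128)
    print("mean level:", S.mean().round(3))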