8 results for network metabolismo flux analysis markov recon

in CaltechTHESIS


Relevance:

40.00%

Publisher:

Abstract:

Hematopoiesis is a well-established system used to study developmental choices amongst cells with multiple lineage potentials, as well as the transcription factor network interactions that drive these developmental paths. Multipotent progenitors travel from the bone marrow to the thymus, where T-cell development is initiated, and these early T-cell precursors retain lineage plasticity even after initiating a T-cell program. The development of these early cells is driven by Notch signaling and the combinatorial expression of many transcription factors, several of which are also involved in the development of other cell lineages. The ETS family transcription factor PU.1 is involved in the development of progenitor, myeloid, and lymphoid cells, and can divert progenitor T-cells from the T-lineage to a myeloid lineage. This diversion of early T-cells by PU.1 can be blocked by Notch signaling. The PU.1 and Notch interaction creates a switch wherein PU.1 in the presence of Notch signaling promotes T-cell identity and PU.1 in the absence of Notch signaling promotes a myeloid identity. Here we characterized an early T-cell line, Scid.adh.2c2, as a good model system for studying the myeloid vs. lymphoid developmental choice dependent on PU.1 and Notch signaling. We then used the Scid.adh.2c2 system to identify mechanisms mediating PU.1 and Notch signaling interactions during early T-cell development. We show that the mechanism by which Notch signaling protects pro-T cells is neither degradation nor modification of the PU.1 protein. Instead we give evidence that Notch signaling blocks the PU.1-driven inhibition of a key set of T-cell regulatory genes including Myb, Tcf7, and Gata3. We show that the protection of Gata3 from PU.1-mediated inhibition, by Notch signaling and Myb, is important for retaining a T-lineage identity. We also discuss a PU.1-driven mechanism involving E-protein inhibition that leads to the inhibition of Notch target genes. This mechanism may serve as a lockdown in pro-T cells that have made the decision to divert to the myeloid pathway.

Relevance:

40.00%

Publisher:

Abstract:

Cdc48/p97 is an essential, highly abundant hexameric member of the AAA (ATPase associated with various cellular activities) family. It has been linked to a variety of processes throughout the cell but it is best known for its role in the ubiquitin proteasome pathway. In this system it is believed that Cdc48 behaves as a segregase, transducing the chemical energy of ATP hydrolysis into mechanical force to separate ubiquitin-conjugated proteins from their tightly-bound partners.

Current models posit that Cdc48 is linked to its substrates through a variety of adaptor proteins, including a family of seven proteins (13 in humans) that contain a Cdc48-binding UBX domain. Because it serves as the hub of this complex adaptor network, Cdc48/p97 has the potential to exert a profound influence on the ubiquitin proteasome pathway. However, the number of known substrates of Cdc48/p97 remains relatively small, and smaller still is the number of substrates that have been linked to a specific UBX domain protein. The goal of this dissertation research has therefore been to discover new substrates and better understand the functions of the Cdc48 network. With this objective in mind, we established a proteomic screen to assemble a catalog of candidate substrates/targets of the Ubx adaptor system.

Here we describe the implementation and optimization of a cutting-edge quantitative mass spectrometry method to measure relative changes in the Saccharomyces cerevisiae proteome. Using this technology, and to better understand the breadth of function of Cdc48 and its adaptors, we then performed a global screen to identify accumulating ubiquitin conjugates in cdc48-3 and ubxΔ mutants. In this screen the different ubx mutants exhibited reproducible patterns of conjugate accumulation that differed greatly from one another, pointing to various unexpected functional specializations of the individual Ubx proteins.
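As an illustration of the kind of analysis such a screen entails (a minimal sketch; the column names, CSV layout, and two-fold cutoff are illustrative assumptions, not taken from the thesis), one can rank proteins by the accumulation of their ubiquitin conjugates in a mutant relative to wild type:

```python
import pandas as pd

def rank_conjugate_accumulation(ratios_csv, min_fold=2.0):
    """Rank proteins by ubiquitin-conjugate accumulation in a mutant.

    Expects a table with columns 'protein', 'mutant', and 'wildtype'
    holding relative quantitative-MS abundances of purified conjugates.
    (Hypothetical layout for illustration only.)
    """
    df = pd.read_csv(ratios_csv)
    df["fold_change"] = df["mutant"] / df["wildtype"]
    # Keep proteins whose conjugates accumulate at least min_fold-fold.
    hits = df[df["fold_change"] >= min_fold]
    return hits.sort_values("fold_change", ascending=False)
```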

As validation of our mass spectrometry findings, we then examined in detail the endoplasmic reticulum-bound transcription factor Spt23, which we identified as a putative Ubx2 substrate. In these studies ubx2Δ cells were deficient both in processing of Spt23 to its active p90 form and in localizing p90 to the nucleus. Additionally, consistent with reduced processing of Spt23, ubx2Δ cells demonstrated a defect in expression of the Spt23 target gene OLE1, which encodes a fatty acid desaturase. Overall, this work demonstrates the power of proteomics as a tool to identify new targets of various pathways and reveals Ubx2 as a key regulator of lipid membrane biosynthesis.

Relevance:

30.00%

Publisher:

Abstract:

The brain is perhaps the most complex system to have ever been subjected to rigorous scientific investigation. The scale is staggering: over 10^11 neurons, each making an average of 10^3 synapses, with computation occurring on scales ranging from a single dendritic spine, to an entire cortical area. Slowly, we are beginning to acquire experimental tools that can gather the massive amounts of data needed to characterize this system. However, to understand and interpret these data will also require substantial strides in inferential and statistical techniques. This dissertation attempts to meet this need, extending and applying the modern tools of latent variable modeling to problems in neural data analysis.

It is divided into two parts. The first begins with an exposition of the general techniques of latent variable modeling. A new and extremely general optimization algorithm, called Relaxation Expectation Maximization (REM), is proposed for learning the optimal parameter values of arbitrary latent variable models. This algorithm appears to alleviate the common problem of convergence to local, sub-optimal likelihood maxima. REM leads to a natural framework for model size selection; in combination with standard model selection techniques, the quality of fits may be further improved while the appropriate model size is automatically and efficiently determined. Next, a new latent variable model, the mixture of sparse hidden Markov models, is introduced, and approximate inference and learning algorithms are derived for it. This model is applied in the second part of the thesis.
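The thesis's REM algorithm is not reproduced here, but the expectation-maximization procedure it builds on is standard. A minimal sketch of EM for a two-component one-dimensional Gaussian mixture (the function and the toy data are illustrative, not from the thesis):

```python
import numpy as np

def em_gmm_1d(x, n_iter=100, seed=0):
    """Standard EM for a two-component 1-D Gaussian mixture.

    Shows the baseline EM updates; REM additionally relaxes the
    objective to avoid local, sub-optimal likelihood maxima.
    """
    rng = np.random.default_rng(seed)
    w = np.array([0.5, 0.5])            # mixing weights
    mu = rng.choice(x, 2)               # component means
    var = np.array([x.var(), x.var()])  # component variances
    for _ in range(n_iter):
        # E-step: responsibility of each component for each data point.
        dens = np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        r = w * dens
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate parameters from the responsibilities.
        n_k = r.sum(axis=0)
        w, mu = n_k / len(x), (r * x[:, None]).sum(axis=0) / n_k
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / n_k
    return w, mu, var

# Toy usage: two well-separated clusters.
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-2, 1, 500), rng.normal(3, 1, 500)])
print(em_gmm_1d(x))
```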

The second part brings the technology of Part I to bear on two important problems in experimental neuroscience. The first is known as spike sorting: the problem of separating the spikes of different neurons embedded within an extracellular recording. The dissertation offers the first thorough statistical analysis of this problem, which then yields the first powerful probabilistic solution. The second problem addressed is that of characterizing the distribution of spike trains recorded from the same neuron under identical experimental conditions. A latent variable model is proposed. Inference and learning in this model lead to new principled algorithms for smoothing and clustering of spike data.
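To make the mixture-model view of spike sorting concrete (a hedged sketch of the generic approach, not the thesis's sparse-HMM method): project detected spike waveforms onto a few principal components and cluster the projections with a Gaussian mixture, assigning each spike to a putative neuron.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

def sort_spikes(waveforms, n_neurons=3, n_pcs=3):
    """Cluster extracellular spike waveforms into putative single units.

    waveforms: array of shape (n_spikes, n_samples), one row per spike.
    Returns one integer cluster label per spike. This is a plain GMM
    sketch; the thesis instead develops a probabilistic model (the
    mixture of sparse hidden Markov models) suited to this problem.
    """
    features = PCA(n_components=n_pcs).fit_transform(waveforms)
    gmm = GaussianMixture(n_components=n_neurons, covariance_type="full",
                          random_state=0).fit(features)
    return gmm.predict(features)
```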

Relevance:

30.00%

Publisher:

Abstract:

Part I

Particles are a key feature of planetary atmospheres. On Earth they represent the greatest source of uncertainty in the global energy budget. This uncertainty can be addressed by making more measurements, by improving the theoretical analysis of measurements, and by better modeling basic particle nucleation and initial particle growth within an atmosphere. This work focuses on the latter two methods of improvement.

Uncertainty in measurements is largely due to particle charging. Accurate descriptions of particle charging are challenging because one deals with particles in a gas, as opposed to a vacuum, so different length scales come into play. Previous studies have considered the effects of the transition between the continuum and kinetic regimes and the effects of two- and three-body interactions within the kinetic regime. These studies, however, rest on questionable assumptions about the charging process, which skew the resulting observations and bias the proposed dynamics of aerosol particles. These assumptions affect both the ions and the particles in the system. Ions are assumed to be point monopoles with a single characteristic speed rather than a distribution of speeds. Particles are assumed to be perfect conductors carrying up to five elementary charges. The effects of three-body (ion-molecule-particle) interactions are also overestimated. By revising this theory so that the basic physical attributes of both ions and particles, and their interactions, are better represented, we are able to make more accurate predictions of particle charging in both the kinetic and continuum regimes.
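For orientation, the continuum-regime benchmark that any revised charging theory must recover (a standard textbook result, stated here as background rather than as the thesis's revised theory): the steady-state diffusive flux J of ions with concentration n_∞ and diffusivity D to a particle of radius R, through an interaction potential Φ(r), is

```latex
J \;=\; \frac{4\pi D\, n_\infty}{\displaystyle\int_R^\infty e^{\Phi(r)/kT}\,\frac{dr}{r^2}}
\;\;\xrightarrow{\;\Phi(r)\,=\,kT\,\lambda/r\;}\;\;
J \;=\; \frac{4\pi D\, n_\infty\, \lambda}{e^{\lambda/R}-1},
```

where the Coulomb form on the right uses λ = zeQ/(4πε₀kT); as λ → 0 this recovers the neutral-particle result J = 4πDRn_∞.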

The same revised theory that was used above to model ion charging can also be applied to the flux of neutral vapor phase molecules to a particle or initial cluster. Using these results we can model the vapor flux to a neutral or charged particle due to diffusion and electromagnetic interactions. In many classical theories currently applied to these models, the finite size of the molecule and the electromagnetic interaction between the molecule and particle, especially for the neutral particle case, are completely ignored, or, as is often the case for a permanent dipole vapor species, strongly underestimated. Comparing our model to these classical models we determine an “enhancement factor” to characterize how important the addition of these physical parameters and processes is to the understanding of particle nucleation and growth.
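The enhancement factor mentioned above is conventionally defined against the classical neutral, point-molecule flux (a hedged restatement of the usual convention, with symbols as in the previous block):

```latex
J_0 \;=\; 4\pi D R\, n_\infty, \qquad \eta \;\equiv\; \frac{J}{J_0},
```

so η > 1 quantifies how much finite molecular size and electromagnetic (e.g., permanent-dipole) interactions increase the vapor flux, and hence early particle growth, over the classical prediction.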

Part II

Whispering gallery mode (WGM) optical biosensors are capable of extraordinarily sensitive specific and non-specific detection of species suspended in a gas or fluid. Recent experimental results suggest that these devices may attain single-molecule sensitivity to protein solutions in the form of stepwise shifts in their resonance wavelength, λ_R, but present sensor models predict much smaller steps than were reported. This study examines the physical interaction between a WGM sensor and a molecule adsorbed to its surface, exploring assumptions made in previous efforts to model WGM sensor behavior, and describing computational schemes that model the experiments for which single-protein sensitivity was reported. The resulting model is used to simulate sensor performance, within constraints imposed by the limited material property data. On this basis, we conclude that nonlinear optical effects would be needed to attain the reported sensitivity, and that, in the experiments for which extreme sensitivity was reported, a bound protein experiences optical energy fluxes too high for such effects to be ignored.
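For context, the first-order perturbation estimate that usually serves as the baseline in WGM sensor models (the "reactive" shift; quoted as standard background, not as this study's final nonlinear model) relates the fractional resonance shift to the excess polarizability α_ex of the adsorbed molecule and the unperturbed mode field E₀ at the binding site r_s:

```latex
\frac{\Delta\lambda_R}{\lambda_R} \;\approx\;
\frac{\alpha_{\mathrm{ex}}\,\lvert \mathbf{E}_0(\mathbf{r}_s)\rvert^{2}}
     {2\int \varepsilon(\mathbf{r})\,\lvert \mathbf{E}_0(\mathbf{r})\rvert^{2}\, dV}.
```

Because a single protein's α_ex is tiny, this linear estimate yields steps far smaller than those reported, which is what motivates the study's examination of nonlinear effects.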

Relevance:

30.00%

Publisher:

Abstract:

Large quantities of teleseismic short-period seismograms recorded at SCARLET provide travel time, apparent velocity and waveform data for study of upper mantle compressional velocity structure. Relative array analysis of arrival times from distant (30° < Δ < 95°) earthquakes at all azimuths constrains lateral velocity variations beneath southern California. We compare dT/dΔ back azimuth and averaged arrival time estimates from the entire network for 154 events to the same parameters derived from small subsets of SCARLET. Patterns of mislocation vectors for over 100 overlapping subarrays delimit the spatial extent of an east-west striking, high-velocity anomaly beneath the Transverse Ranges. Thin lens analysis of the averaged arrival time differences, called 'net delay' data, requires the mean depth of the corresponding lens to be more than 100 km. Our results are consistent with the PKP-delay times of Hadley and Kanamori (1977), who first proposed the high-velocity feature, but we place the anomalous material at substantially greater depths than their 40-100 km estimate.

Detailed analysis of travel time, ray parameter and waveform data from 29 events occurring in the distance range 9° to 40° reveals the upper mantle structure beneath an oceanic ridge to depths of over 900 km. More than 1400 digital seismograms from earthquakes in Mexico and Central America yield 1753 travel times and 58 dT/dΔ measurements as well as high-quality, stable waveforms for investigation of the deep structure of the Gulf of California. The result of a travel time inversion with the tau method (Bessonova et al., 1976) is adjusted to fit the p(Δ) data, then further refined by incorporation of relative amplitude information through synthetic seismogram modeling, yielding the final model, GCA. The application of a modified wave field continuation method (Clayton and McMechan, 1981) to the data confirms that GCA is consistent with the entire data set and also provides an estimate of the data resolution in velocity-depth space. We discover that the upper mantle under this spreading center has anomalously slow velocities to depths of 350 km, and place new constraints on the shape of the 660 km discontinuity.
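The tau method referenced above inverts travel times through the delay-time function (standard flat-earth form, given for the reader; p = dT/dΔ is the ray parameter and X the horizontal distance):

```latex
\tau(p) \;=\; T(p) - p\,X(p) \;=\; 2\int_{0}^{z(p)} \sqrt{u^{2}(z) - p^{2}}\; dz,
```

where u(z) is the slowness at depth z and z(p) the turning depth; τ(p) remains single-valued even where T(X) is triplicated, which is what makes the inversion well behaved.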

Seismograms from 22 earthquakes along the northeast Pacific rim recorded in southern California form the data set for a comparative investigation of the upper mantle beneath the Cascade Ranges-Juan de Fuca region, an ocean-continent transition. These data consist of 853 seismograms (6° < Δ < 42°) which produce 1068 travel times and 40 ray parameter estimates. We use the spreading center model initially in synthetic seismogram modeling, and perturb GCA until the Cascade Ranges data are matched. Wave field continuation of both data sets with a common reference model confirms that real differences exist between the two suites of seismograms, implying lateral variation in the upper mantle. The ocean-continent transition model, CJF, features velocities between 200 and 350 km depth that are intermediate between GCA and T7 (Burdick and Helmberger, 1978), a model for the inland western United States. Models of continental shield regions (e.g., King and Calcagnile, 1976) have higher velocities in this depth range, but all four model types are similar below 400 km. This variation in the rate of velocity increase with tectonic regime suggests an inverse relationship between velocity gradient and lithospheric age above 400 km depth.

Relevance:

30.00%

Publisher:

Abstract:

This dissertation studies the long-term behavior of random Riccati recursions and of a mathematical epidemic model. Riccati recursions arise in Kalman filtering: the error covariance matrix of the Kalman filter satisfies a Riccati recursion. Convergence conditions for time-invariant Riccati recursions are well studied. We focus on the time-varying case and assume that the regressor matrices are random, independent and identically distributed according to a given distribution whose probability density function is continuous, supported on the whole space, and decays faster than any polynomial. We study the geometric convergence of the resulting probability distribution.

We also study the global dynamics of epidemic spread over complex networks for various models. For instance, in the discrete-time Markov chain model, each node is either healthy or infected at any given time, so the number of states grows exponentially with the size of the network. The Markov chain has a unique stationary distribution, in which all nodes are healthy with probability 1. Since the probability distribution of a Markov chain on a finite state space converges to its stationary distribution, this model predicts that the epidemic dies out after a long enough time. To analyze the Markov chain model, we study a nonlinear epidemic model whose state at any given time is the vector of marginal infection probabilities of the nodes in the network. Convergence to the origin in this epidemic map implies extinction of the epidemic. The nonlinear model is upper-bounded by its linearization at the origin; as a result, the origin is the globally stable unique fixed point of the nonlinear model if the linear upper bound is stable. When the linear upper bound is unstable, the nonlinear model has a second fixed point, and we analyze its stability for both discrete-time and continuous-time models. Returning to the Markov chain model, we claim that stability of the linear upper bound for the nonlinear model is strongly related to the extinction time of the Markov chain. We show that a stable linear upper bound is a sufficient condition for fast extinction and that the probability of survival is bounded in terms of the nonlinear epidemic map.
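A minimal sketch of the linearized extinction test described above, for a discrete-time SIS-type marginal-probability model (the symbols β for infection rate, δ for curing rate, and A for the adjacency matrix are generic stand-ins, not the dissertation's notation):

```python
import numpy as np

def epidemic_dies_out(adjacency, beta, delta):
    """Check the linear-upper-bound extinction condition for a
    discrete-time SIS-type epidemic on a network.

    Near the origin the nonlinear marginal-probability map is
    upper-bounded by p -> ((1 - delta) * I + beta * A) p, so
    extinction is globally stable if that matrix has spectral
    radius below 1.
    """
    n = adjacency.shape[0]
    linear_map = (1 - delta) * np.eye(n) + beta * adjacency
    spectral_radius = max(abs(np.linalg.eigvals(linear_map)))
    return spectral_radius < 1

# Toy usage: a 4-node ring; the spectral radius is 0.8 < 1, so the
# infection probabilities decay geometrically to zero.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
print(epidemic_dies_out(A, beta=0.1, delta=0.4))  # True
```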

Relevance:

30.00%

Publisher:

Abstract:

The intensities and relative abundances of galactic cosmic ray protons and antiprotons have been measured with the Isotope Matter Antimatter Experiment (IMAX), a balloon-borne magnet spectrometer. The IMAX payload had a successful flight from Lynn Lake, Manitoba, Canada on July 16, 1992. Particles detected by IMAX were identified by mass and charge via the Cherenkov-Rigidity and TOF-Rigidity techniques, with measured rms mass resolution ≤0.2 amu for Z=1 particles.

Cosmic ray antiprotons are of interest because they can be produced by the interactions of high energy protons and heavier nuclei with the interstellar medium as well as by more exotic sources. Previous cosmic ray antiproton experiments have reported an excess of antiprotons over that expected solely from cosmic ray interactions.

Analysis of the flight data has yielded 124405 protons and 3 antiprotons in the energy range 0.19-0.97 GeV at the instrument, 140617 protons and 8 antiprotons in the energy range 0.97-2.58 GeV, and 22524 protons and 5 antiprotons in the energy range 2.58-3.08 GeV. These measurements are a statistical improvement over previous antiproton measurements, and they demonstrate improved separation of antiprotons from the more abundant fluxes of protons, electrons, and other cosmic ray species.

When these results are corrected for instrumental and atmospheric background and losses, the ratios at the top of the atmosphere are p̄/p = 3.21(+3.49, -1.97)x10^(-5) in the energy range 0.25-1.00 GeV, p̄/p = 5.38(+3.48, -2.45)x10^(-5) in the energy range 1.00-2.61 GeV, and p̄/p = 2.05(+1.79, -1.15)x10^(-4) in the energy range 2.61-3.11 GeV. The corresponding antiproton intensities, also corrected to the top of the atmosphere, are 2.3(+2.5, -1.4)x10^(-2) (m^2 s sr GeV)^(-1), 2.1(+1.4, -1.0)x10^(-2) (m^2 s sr GeV)^(-1), and 4.3(+3.7, -2.4)x10^(-2) (m^2 s sr GeV)^(-1) for the same energy ranges.
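As a quick scale check (a back-of-envelope ratio formed from the raw counts quoted above, before the background and loss corrections described in the text): in the middle energy range the uncorrected ratio is

```latex
\frac{\bar{p}}{p}\bigg|_{\mathrm{raw}} \;=\; \frac{8}{140617} \;\approx\; 5.7\times10^{-5},
```

within roughly 10% of the corrected top-of-atmosphere value 5.38x10^(-5), indicating that the net corrections in this range are of order unity.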

The IMAX antiproton fluxes and antiproton/proton ratios are compared with recent Standard Leaky Box Model (SLBM) calculations of the cosmic ray antiproton abundance. According to this model, cosmic ray antiprotons are secondary cosmic rays arising solely from the interaction of high energy cosmic rays with the interstellar medium. The effects of solar modulation of protons and antiprotons are also calculated, showing that the antiproton/proton ratio can vary by as much as an order of magnitude over the solar cycle. When solar modulation is taken into account, the IMAX antiproton measurements are found to be consistent with the most recent calculations of the SLBM. No evidence is found in the IMAX data for excess antiprotons arising from the decay of galactic dark matter, which had been suggested as an interpretation of earlier measurements. Furthermore, the consistency of the current results with the SLBM calculations suggests that the mean antiproton lifetime is at least as large as the cosmic ray storage time in the galaxy (~10^7 yr, based on measurements of cosmic ray ^(10)Be). Recent measurements by two other experiments are consistent with this interpretation of the IMAX antiproton results.

Relevance:

30.00%

Publisher:

Abstract:

Network information theory and channels with memory are two important but difficult frontiers of information theory. In this two-part dissertation we study these two areas, each comprising one part.

For the first area we study the so-called entropy vectors via finite group theory, and the network codes constructed from finite groups. In particular, we identify the smallest finite group that violates the Ingleton inequality (stated below), an inequality respected by all linear network codes but not satisfied by all entropy vectors. Based on the analysis of this group we generalize it to several families of Ingleton-violating groups, which may be used to design good network codes. In that direction, we study the network codes constructed with finite groups, and show in particular that linear network codes are embedded in the group network codes constructed with these Ingleton-violating families. Such group network codes are strictly more powerful than linear network codes, as they are able to violate the Ingleton inequality while linear network codes cannot.

For the second area, we study the impact of memory on the channel capacity through a novel communication system: the energy harvesting channel. Unlike in traditional communication systems, the transmitter of an energy harvesting channel is powered by an exogenous energy harvesting device and a finite-sized battery. As a consequence, at each channel use the system can only transmit a symbol whose energy consumption is no more than the energy currently available. This new type of power supply introduces an unprecedented input constraint for the channel, one that is random, instantaneous, and has memory. Furthermore, the energy harvesting process is naturally observed causally at the transmitter, but no such information is provided to the receiver. Both of these features pose great challenges for the analysis of the channel capacity. In this work we use techniques from channels with side information and finite state channels to obtain lower and upper bounds on the capacity of the energy harvesting channel. In particular, we study the stationarity and ergodicity conditions of a surrogate channel to compute and optimize the achievable rates for the original channel. In addition, for practical code design of the system we study the pairwise error probabilities of the input sequences.
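For reference, the Ingleton inequality in its entropic form (the standard statement, with generic variable names rather than the dissertation's notation): for any four jointly distributed random variables X₁, X₂, X₃, X₄,

```latex
I(X_1; X_2) \;\le\; I(X_1; X_2 \mid X_3) + I(X_1; X_2 \mid X_4) + I(X_3; X_4).
```

Entropy vectors induced by linear network codes always satisfy this inequality, so exhibiting a finite group whose associated entropy vector violates it is exactly what certifies that group network codes can outperform linear ones.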