21 results for Uniquely ergodic
in CaltechTHESIS
Abstract:
The primary focus of this thesis is on the interplay of descriptive set theory and the ergodic theory of group actions. This incorporates the study of turbulence and Borel reducibility on the one hand, and the theory of orbit equivalence and weak equivalence on the other. Chapter 2 is joint work with Clinton Conley and Alexander Kechris; we study measurable graph combinatorial invariants of group actions and employ the ultraproduct construction as a way of constructing various measure preserving actions with desirable properties. Chapter 3 is joint work with Lewis Bowen; we study the property MD of residually finite groups, and we prove a conjecture of Kechris by showing that under general hypotheses property MD is inherited by a group from one of its co-amenable subgroups. Chapter 4 is a study of weak equivalence. One of the main results answers a question of Abért and Elek by showing that within any free weak equivalence class the isomorphism relation does not admit classification by countable structures. The proof relies on affirming a conjecture of Ioana by showing that the product of a free action with a Bernoulli shift is weakly equivalent to the original action. Chapter 5 studies the relationship between mixing and freeness properties of measure preserving actions. Chapter 6 studies how approximation properties of ergodic actions and unitary representations are reflected group theoretically and also operator algebraically via a group's reduced C*-algebra. Chapter 7 is an appendix which includes various results on mixing via filters and on Gaussian actions.
Abstract:
This thesis considers in detail the dynamics of two oscillators with weak nonlinear coupling. There are three classes of such problems: non-resonant, where the Poincaré procedure is valid to the order considered; weakly resonant, where the Poincaré procedure breaks down because small divisors appear (but do not affect the O(1) term); and strongly resonant, where small divisors appear and lead to O(1) corrections. A perturbation method based on Cole's two-timing procedure is introduced. It avoids the small divisor problem in a straightforward manner, gives accurate answers which are valid for long times, and appears capable of handling all three types of problems with no change in the basic approach.
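For orientation, a generic two-timing ansatz (standard textbook form, stated here only to fix notation; the expansions carried out in the thesis are more detailed) introduces a slow time τ = ϵt alongside the fast time t:

\[
  x(t;\epsilon) \sim X_0(t,\tau) + \epsilon\, X_1(t,\tau) + \cdots, \qquad \tau = \epsilon t, \qquad
  \frac{d}{dt} \;\to\; \frac{\partial}{\partial t} + \epsilon\,\frac{\partial}{\partial \tau}.
\]

Requiring X_1 to remain bounded in t (no secular terms) determines the slow-time evolution of the leading-order amplitudes and yields approximations valid on times of order 1/ϵ.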
One example of each type is studied with the aid of this procedure: for the non-resonant case the answer is equivalent to the Poincaré result; for the weakly resonant case the analytic form of the answer is found to depend (smoothly) on the difference between the initial energies of the two oscillators; for the strongly resonant case we find that the amplitudes of the two oscillators vary slowly with time as elliptic functions of ϵt, where ϵ is the (small) coupling parameter.
Our results suggest that, as one might expect, the dynamical behavior of such systems varies smoothly with changes in the ratio of the fundamental frequencies of the two oscillators. Thus the pathological behavior of Whittaker's adelphic integrals as the frequency ratio is varied appears to be due to the fact that Whittaker ignored the small divisor problem. The energy sharing properties of these systems appear to depend strongly on the initial conditions, so that the systems are not ergodic.
The perturbation procedure appears to be applicable to a wide variety of other problems in addition to those considered here.
Abstract:
Seismic structure above and below the core-mantle boundary (CMB) has been studied through use of travel time and waveform analyses of several different seismic wave groups. Anomalous systematic trends in observables document mantle heterogeneity on both large and small scales. Analog and digital data has been utilized, and in many cases the analog data has been optically scanned and digitized prior to analysis.
Differential travel times of S - SKS are shown to be an excellent diagnostic of anomalous lower mantle shear velocity (V_s) structure. Wavepath geometries beneath the central Pacific exhibit large S - SKS travel time residuals (up to 10 sec), and are consistent with a large-scale, O(1000 km), slower than average V_s region (≥3%). S - SKS times for paths traversing this region exhibit smaller-scale patterns and trends, O(100 km), indicating V_s perturbations on many scale lengths. These times are compared to predictions of three tomographically derived aspherical models: MDLSH of Tanimoto [1990], model SH12_WM13 of Su et al. [1992], and model SH.10c.17 of Masters et al. [1992]. Qualitative agreement between the tomographic model predictions and observations is encouraging, varying from fair to good. However, inconsistencies are present and suggest anomalies in the lower mantle of scale length smaller than the present 2000+ km scale resolution of tomographic models. 2-D wave propagation experiments show the importance of inhomogeneous raypaths when considering lateral heterogeneities in the lowermost mantle.
A dataset of waveforms and differential travel times of S, ScS, and the arrival from the D" layer, Scd, provides evidence for a laterally varying V_s velocity discontinuity at the base of the mantle. Two different localized D" regions beneath the central Pacific have been investigated. Predictions from a model having a V_s discontinuity 180 km above the CMB agree well with observations for an eastern mid-Pacific CMB region. This thickness differs from V_s discontinuity thicknesses found in other regions, such as a localized region beneath the western Pacific, which average near 280 km. The "sharpness" of the V_s jump at the top of D", i.e., the depth range over which the V_s increase occurs, is not resolved by our data; in fact, our data may be modeled equally well by a lower mantle with the increase in V_s at the top of D" occurring over a 100 km depth range. It is difficult at present to correlate D" thicknesses from this study to overall lower mantle heterogeneity, due to uncertainties in the 3-D models, as well as poor coverage in maps of D" discontinuity thicknesses.
P-wave velocity structure (V_p) at the base of the mantle is explored using the seismic phases SKS and SPdKS. SPdKS is formed when SKS waves at distances around 107° are incident upon the CMB with a slowness that allows for coupling with diffracted P-waves at the base of the mantle. The P-wave diffraction occurs at both the SKS entrance and exit locations of the outer core. SPdKS arrives slightly later in time than SKS, having a wave path through the mantle and core very close to that of SKS. The difference time between SKS and SPdKS strongly depends on V_p at the base of the mantle near the SKS core entrance and exit points. Observations from deep focus Fiji-Tonga events recorded by North American stations, and South American events recorded by European and Eurasian stations, exhibit anomalously large SPdKS - SKS difference times. SKS and the later arriving SPdKS phase are separated by several seconds more than predictions made by 1-D reference models, such as the global average PREM model [Dziewonski and Anderson, 1981]. Models having a pronounced low-velocity zone (5%) in V_p in the bottom 50-100 km of the mantle predict the size of the observed SPdKS - SKS anomalies. Raypath perturbations from lower mantle V_s structure may also be contributing to the observed anomalies.
Outer core structure is investigated using the family of SmKS (m=2,3,4) seismic waves. SmKS waves travel as S-waves in the mantle and P-waves in the core, reflecting (m-1) times on the underside of the CMB, and are well-suited for constraining outermost core V_p structure. This is due to the closeness of their mantle paths and also the shallow depth range these waves travel in the outermost core. S3KS - S2KS and S4KS - S3KS differential travel times were measured using the cross-correlation method and compared to those from reflectivity synthetics created from core models of past studies. High quality recordings from a deep focus Java Sea event which sample the outer core beneath the northern Pacific, the Arctic, and northwestern North America (spanning 1/8th of the core's surface area) have SmKS wavepaths that traverse regions where lower mantle heterogeneity is predicted to be small, and are well-modeled by the PREM core model, with possibly a small V_p decrease (1.5%) in the outermost 50 km of the core. Such a reduction implies chemical stratification in this 50 km zone, though this model feature is not uniquely resolved. Data having wave paths through areas of known D" heterogeneity (±2% and greater), such as the source-side of SmKS lower mantle paths from Fiji-Tonga to Eurasia and Africa, exhibit systematic SmKS differential time anomalies of up to several seconds. 2-D wave propagation experiments demonstrate how large scale lower mantle velocity perturbations can explain the long wavelength behavior of such anomalous SmKS times. When improperly accounted for, lower mantle heterogeneity maps directly into core structure. Raypaths departing from homogeneity play an important role in producing SmKS anomalies. The existence of outermost core heterogeneity is difficult to resolve at present due to uncertainties in global lower mantle structure. Resolving a one-dimensional chemically stratified outermost core also remains difficult due to the same uncertainties. Restricting study to higher multiples of SmKS (m=2,3,4) can help reduce the effect of mantle heterogeneity due to the closeness of the mantle legs of the wavepaths. SmKS waves are ideal in providing additional information on the details of lower mantle heterogeneity.
Abstract:
Disorder and interactions both play crucial roles in quantum transport. Decades ago, Mott showed that electron-electron interactions can lead to insulating behavior in materials that conventional band theory predicts to be conducting. Soon thereafter, Anderson demonstrated that disorder can localize a quantum particle through the wave interference phenomenon of Anderson localization. Although interactions and disorder both separately induce insulating behavior, the interplay of these two ingredients is subtle and often leads to surprising behavior at the periphery of our current understanding. Modern experiments probe these phenomena in a variety of contexts (e.g. disordered superconductors, cold atoms, photonic waveguides, etc.); thus, theoretical and numerical advancements are urgently needed. In this thesis, we report progress on understanding two contexts in which the interplay of disorder and interactions is especially important.
The first is the so-called “dirty” or random boson problem. In the past decade, a strong-disorder renormalization group (SDRG) treatment by Altman, Kafri, Polkovnikov, and Refael has raised the possibility of a new unstable fixed point governing the superfluid-insulator transition in the one-dimensional dirty boson problem. This new critical behavior may take over from the weak-disorder criticality of Giamarchi and Schulz when disorder is sufficiently strong. We analytically determine the scaling of the superfluid susceptibility at the strong-disorder fixed point and connect our analysis to recent Monte Carlo simulations by Hrahsheh and Vojta. We then shift our attention to two dimensions and use a numerical implementation of the SDRG to locate the fixed point governing the superfluid-insulator transition there. We identify several universal properties of this transition, which are fully independent of the microscopic features of the disorder.
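To indicate what a numerical SDRG implementation involves, here is a minimal one-dimensional decimation loop in Python. It uses the commonly quoted rules for a disordered Josephson-junction (rotor) chain with Josephson couplings J and charging energies U; the rules, variable names, and toy disorder distribution are illustrative assumptions only, and the two-dimensional implementation used in the thesis is considerably more involved.

    import random

    def sdrg_1d(J, U, n_remaining=2):
        """Schematic 1D strong-disorder RG sweep for a random Josephson-junction chain.

        J[i] : Josephson coupling on the link between grains i and i+1
        U[i] : charging energy of grain i
        Assumed decimation rules (standard weak-coupling scheme):
          * largest scale is a bond J  -> merge the two grains into a cluster;
            inverse charging energies add (capacitances in parallel)
          * largest scale is a site U  -> remove the grain; second-order
            perturbation theory leaves an effective bond J_left * J_right / U
        """
        J, U = list(J), list(U)
        while len(U) > n_remaining:
            jmax, i = max((j, k) for k, j in enumerate(J))
            umax, s = max((u, k) for k, u in enumerate(U))
            if jmax >= umax:
                # bond decimation: grains i and i+1 become one cluster
                U[i] = 1.0 / (1.0 / U[i] + 1.0 / U[i + 1])
                del U[i + 1], J[i]
            else:
                # site decimation: remove grain s, couple its neighbours
                if 0 < s < len(U) - 1:
                    J[s - 1] = J[s - 1] * J[s] / U[s]
                    del J[s]
                else:
                    del J[0 if s == 0 else -1]
                del U[s]
        return J, U

    # toy chain with broadly distributed couplings
    random.seed(0)
    N = 200
    couplings = [random.uniform(0.01, 1.0) for _ in range(N - 1)]
    charging = [random.uniform(0.01, 1.0) for _ in range(N)]
    print(sdrg_1d(couplings, charging))

In an actual study one tracks how the distributions of the surviving couplings flow as the energy scale is lowered, and the fixed point is identified from the scale-invariant form of those distributions.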
The second focus of this thesis is the interplay of localization and interactions in systems with high energy density (i.e., far from the usual low energy limit of condensed matter physics). Recent theoretical and numerical work indicates that localization can survive in this regime, provided that interactions are sufficiently weak. Stronger interactions can destroy localization, leading to a so-called many-body localization transition. This dynamical phase transition is relevant to questions of thermalization in isolated quantum systems: it separates a many-body localized phase, in which localization prevents transport and thermalization, from a conducting (“ergodic”) phase in which the usual assumptions of quantum statistical mechanics hold. Here, we present evidence that many-body localization also occurs in quasiperiodic systems that lack true disorder.
Abstract:
Part A
A problem restricting the development of the CuCl laser has been the decrease in output power with increases of tube temperature above 400°C. At that temperature the CuCl vapor pressure is about 0.1 torr. This is a small fraction of the buffer gas pressure (He at 10 torr).
The aim of the project was to measure the peak radiation temperature (assumed related to the mean energy of electrons) in the laser discharge as a function of the tube temperature. A 24 GHz gated microwave radiometer was used.
It was found that at the tube temperatures at which the output power began to deteriorate, the electron radiation temperature showed a sharp increase (compared with radiation temperature in pure buffer).
Using the above result, we have postulated that this sudden increase is a result of Penning ionization of the Cu atoms. As a consequence of this process, the number of Cu atoms available for lasing decreases.
Part B
The aim of the project was to study the dissociation of CO2 in the glow discharge of flowing CO2 lasers.
A TM011 microwave (3 GHz) cavity was used to measure the radially averaged electron density n_e and the electron-neutral collision frequency in the laser discharge. An estimate of the electric field is made from these two measurements. A gas chromatograph was used to measure the chemical composition of the gases after going through the discharge. This instrument was checked against a mass spectrometer for accuracy and sensitivity.
Several typical laser mixtures were used: CO2-N2-He (1,3,16), (1,3,0), (1,0,16), (1,2,10), (1,2,0), (1,0,10), (2,3,15), (2,3,0), (2,0,15), (1,3,16) + H2O, and pure CO2. Results show that for the conditions studied the dissociation as a function of the electron density is uniquely determined by the STP partial flow rate of CO2, regardless of the amount of N2 and/or He present. The presence of water vapor in the discharge decreased the degree of dissociation.
A simple theoretical model was developed using thermodynamic equilibrium. The electrons were replaced in the calculations by a distributed heat source.
The results are analyzed with a simple kinetic model.
Abstract:
In the quest for a descriptive theory of decision-making, the rational actor model in economics imposes rather unrealistic expectations and abilities on human decision makers. The further we move from idealized scenarios, such as perfectly competitive markets, and ambitiously extend the reach of the theory to describe everyday decision-making situations, the less sense these assumptions make. Behavioural economics has instead proposed models based on assumptions that are more psychologically realistic, with the aim of gaining more precision and descriptive power. Increased psychological realism, however, comes at the cost of a greater number of parameters and model complexity. There is now a plethora of models, based on different assumptions and applicable in differing contextual settings, and selecting the right model to use tends to be an ad hoc process. In this thesis, we develop optimal experimental design methods and evaluate different behavioral theories against evidence from lab and field experiments.
We look at evidence from controlled laboratory experiments. Subjects are presented with choices between monetary gambles or lotteries. Different decision-making theories evaluate the choices differently and would make distinct predictions about the subjects' choices. Theories whose predictions are inconsistent with the actual choices can be systematically eliminated. Behavioural theories can have multiple parameters, requiring complex experimental designs with a very large number of possible choice tests. This imposes computational and economic constraints on using classical experimental design methods. We develop a methodology of adaptive tests, Bayesian Rapid Optimal Adaptive Designs (BROAD), that sequentially chooses the "most informative" test at each stage and, based on the response, updates its posterior beliefs over the theories, which informs the next most informative test to run. BROAD utilizes the Equivalence Class Edge Cutting (EC2) criterion to select tests. We prove that the EC2 criterion is adaptively submodular, which allows us to prove theoretical guarantees against the Bayes-optimal testing sequence even in the presence of noisy responses. In simulated ground-truth experiments, we find that the EC2 criterion recovers the true hypotheses with significantly fewer tests than more widely used criteria such as Information Gain and Generalized Binary Search. We show, theoretically as well as experimentally, that, surprisingly, these popular criteria can perform poorly in the presence of noise or subject errors. Furthermore, we use the adaptive submodularity of EC2 to implement an accelerated greedy version of BROAD which leads to orders of magnitude speedup over other methods.
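As a sketch of the adaptive loop only (the function names and noise interface below are hypothetical, and plain expected information gain is used purely as a stand-in for the EC2 objective; the accelerated greedy machinery is omitted):

    import numpy as np

    def entropy(p):
        p = p[p > 0]
        return -np.sum(p * np.log(p))

    def adaptive_design(tests, hypotheses, response_prob, run_test, n_rounds):
        """Schematic Bayesian adaptive experiment loop (all names are illustrative).

        tests         : candidate choice problems (e.g. lottery pairs)
        hypotheses    : candidate behavioural models with fixed parameters
        response_prob : response_prob(test, h) -> array of response probabilities
                        under hypothesis h (this is where a noise model enters)
        run_test      : run_test(test) -> index of the subject's observed response
        The selection objective below is plain expected information gain,
        used here only as a stand-in for the EC2 criterion.
        """
        posterior = np.full(len(hypotheses), 1.0 / len(hypotheses))
        for _ in range(n_rounds):
            best_test, best_gain = None, -np.inf
            for t in tests:
                probs = np.array([response_prob(t, h) for h in hypotheses])  # (H, R)
                predictive = posterior @ probs                               # (R,)
                # expected posterior entropy after observing each possible response
                exp_entropy = sum(pr * entropy(posterior * probs[:, r] / pr)
                                  for r, pr in enumerate(predictive) if pr > 0)
                gain = entropy(posterior) - exp_entropy
                if gain > best_gain:
                    best_test, best_gain = t, gain
            r_obs = run_test(best_test)                  # ask the subject
            lik = np.array([response_prob(best_test, h)[r_obs] for h in hypotheses])
            posterior = posterior * lik
            posterior /= posterior.sum()
            tests = [t for t in tests if t is not best_test]  # do not repeat a test
        return posterior

Replacing the stand-in objective with the EC2 criterion, which (roughly) scores a test by the weight of hypothesis pairs from different equivalence classes that it can distinguish, is what yields the noise-robustness guarantees described above.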
We use BROAD to perform two experiments. First, we compare the main classes of theories for decision-making under risk, namely: expected value, prospect theory, constant relative risk aversion (CRRA), and moments models. Subjects are given an initial endowment, and sequentially presented with choices between two lotteries, with the possibility of losses. The lotteries are selected using BROAD, and 57 subjects from Caltech and UCLA are incentivized by randomly realizing one of the lotteries chosen. Aggregate posterior probabilities over the theories show limited evidence in favour of CRRA and moments models. Classifying the subjects into types showed that most subjects are described by prospect theory, followed by expected value. Adaptive experimental design raises the possibility that subjects could engage in strategic manipulation, i.e. subjects could mask their true preferences and choose differently in order to obtain more favourable tests in later rounds, thereby increasing their payoffs. We pay close attention to this problem; strategic manipulation is ruled out since it is infeasible in practice, and also since we do not find any signatures of it in our data.
In the second experiment, we compare the main theories of time preference: exponential discounting, hyperbolic discounting, "present bias" models: quasi-hyperbolic (α, β) discounting and fixed cost discounting, and generalized-hyperbolic discounting. 40 subjects from UCLA were given choices between 2 options: a smaller but more immediate payoff versus a larger but later payoff. We found very limited evidence for present bias models and hyperbolic discounting, and most subjects were classified as generalized hyperbolic discounting types, followed by exponential discounting.
In these models the passage of time is linear. We instead consider a psychological model where the perception of time is subjective. We prove that when the biological (subjective) time is positively dependent, it gives rise to hyperbolic discounting and temporal choice inconsistency.
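For context, a standard calculation (not the positively-dependent subjective-time construction of the thesis) already shows how randomness in the effective rate at which delay is experienced produces a generalized-hyperbolic form: if the effective rate ρ is gamma distributed with shape k and scale θ, then

\[
  D(t) \;=\; \mathbb{E}\bigl[e^{-\rho t}\bigr]
       \;=\; \int_0^\infty e^{-\rho t}\,\frac{\rho^{k-1}e^{-\rho/\theta}}{\Gamma(k)\,\theta^{k}}\,d\rho
       \;=\; (1+\theta t)^{-k},
\]

which reduces to the familiar hyperbolic discount function 1/(1 + θt) when k = 1 and, unlike exponential discounting, generates preference reversals over time.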
We also test the predictions of behavioral theories in the "wild". We pay attention to prospect theory, which emerged as the dominant theory in our lab experiments of risky choice. Loss aversion and reference dependence predict that consumers will behave in ways distinct from those predicted by the standard rational model. Specifically, loss aversion predicts that when an item is being offered at a discount, the demand for it will be greater than that explained by its price elasticity. Even more importantly, when the item is no longer discounted, demand for its close substitute would increase excessively. We tested this prediction using a discrete choice model with a loss-averse utility function on data from a large eCommerce retailer. Not only did we identify loss aversion, but we also found that the effect decreased with consumers' experience. We outline the policy implications that consumer loss aversion entails, and strategies for competitive pricing.
In future work, BROAD can be widely applicable for testing different behavioural models, e.g. in social preference and game theory, and in different contextual settings. Additional measurements beyond choice data, including biological measurements such as skin conductance, can be used to more rapidly eliminate hypotheses and speed up model comparison. Discrete choice models also provide a framework for testing behavioural models with field data, and encourage combined lab-field experiments.
Abstract:
From studies of protoplanetary disks to extrasolar planets and planetary debris, we aim to understand the full evolution of a planetary system. Observational constraints from ground- and space-based instrumentation allow us to measure the properties of objects near and far and are central to developing this understanding. We present here three observational campaigns that, when combined with theoretical models, reveal characteristics of different stages and remnants of planet formation. The Kuiper Belt provides evidence of chemical and dynamical activity that reveals clues to its primordial environment and subsequent evolution. Large samples of this population can only be assembled at optical wavelengths, with thermal measurements at infrared and sub-mm wavelengths currently available for only the largest and closest bodies. We measure the size and shape of one particular object precisely here, in hopes of better understanding its unique dynamical history and layered composition.
Molecular organic chemistry is one of the most fundamental and widespread facets of the universe, and plays a key role in planet formation. A host of carbon-containing molecules vibrationally emit in the near-infrared when excited by warm gas, T~1000 K. The NIRSPEC instrument at the W.M. Keck Observatory is uniquely configured to study large ranges of this wavelength region at high spectral resolution. Using this facility we present studies of warm CO gas in protoplanetary disks, with a new code for precise excitation modeling. A parameterized suite of models demonstrates the abilities of the code and matches observational constraints such as line strength and shape. We use the models to probe various disk parameters as well; they are easily extensible to other species with known disk emission spectra, such as water, carbon dioxide, acetylene, and hydrogen cyanide.
Lastly, the existence of molecules in extrasolar planets can also be studied with NIRSPEC and reveals a great deal about the evolution of the protoplanetary gas. The species we observe in protoplanetary disks are also often present in exoplanet atmospheres, and are abundant in Earth's atmosphere as well. Thus, a sophisticated telluric removal code is necessary to analyze these high dynamic range, high-resolution spectra. We present observations of a hot Jupiter, revealing water in its atmosphere and demonstrating a new technique for exoplanet mass determination and atmospheric characterization. We will also be applying this atmospheric removal code to the aforementioned disk observations, to improve our data analysis and probe less abundant species. Guiding models using observations is the only way to develop an accurate understanding of the timescales and processes involved. The futures of the modeling and of the observations are bright, and the end goal of realizing a unified model of planet formation will require both theory and data, from a diverse collection of sources.
Abstract:
Over the last several decades there have been significant advances in the study and understanding of light behavior in nanoscale geometries. Entire fields such as those based on photonic crystals, plasmonics and metamaterials have been developed, accelerating the growth of knowledge related to nanoscale light manipulation. Coupled with recent interest in cheap, reliable renewable energy, a new field has blossomed, that of nanophotonic solar cells.
In this thesis, we examine important properties of thin-film solar cells from a nanophotonics perspective. We identify key differences between nanophotonic devices and traditional, thick solar cells. We propose a new way of understanding and describing limits to light trapping and show that certain nanophotonic solar cell designs can have light trapping limits above the so-called ray-optic or ergodic limit. We propose that a necessary condition for exceeding the traditional light trapping limit is that the active region of the solar cell must possess a local density of optical states (LDOS) higher than that of the corresponding bulk material. Additionally, we show that beyond having an increased density of states, the absorber must have an appropriate incoupling mechanism to transfer light from free space into the optical modes of the device. We outline a portfolio of new solar cell designs that have the potential to exceed the traditional light trapping limit and numerically validate our predictions for select cases.
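For reference, the ray-optic (ergodic) limit mentioned above is usually quoted as a maximum absorption-path enhancement of

\[
  F_{\max} = 4n^2 \quad \text{(isotropic illumination and emission)}, \qquad
  F_{\max} = \frac{4n^2}{\sin^2\theta} \quad \text{(emission restricted to a cone of half-angle } \theta\text{)},
\]

where n is the refractive index of the absorber; this is the benchmark that an elevated LDOS, combined with adequate incoupling, allows a nanophotonic absorber to exceed.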
We emphasize the importance of thinking about light trapping in terms of maximizing the optical modes of the device and efficiently coupling light into them from free space. To further explore these two concepts, we optimize patterns of superlattices of air holes in thin slabs of Si and show that by adding a roughened incoupling layer the total absorbed current can be increased synergistically. We suggest that the addition of a random scattering surface to a periodic patterning can increase incoupling by lifting the constraint of selective mode occupation associated with periodic systems.
Lastly, through experiment and simulation, we investigate a potential high efficiency solar cell architecture that can be improved with the nanophotonic light trapping concepts described in this thesis. Optically thin GaAs solar cells are prepared by the epitaxial liftoff process, removing them from their growth substrate and adding a metallic back reflector. A process of depositing large-area nanopatterns on the surface of the cells is developed using nanoimprint lithography and implemented on the thin GaAs cells.
Abstract:
The ability to sense mechanical force is vital to all organisms to interact with and respond to stimuli in their environment. Mechanosensation is critical to many physiological functions such as the senses of hearing and touch in animals, gravitropism in plants and osmoregulation in bacteria. Of these processes, the best understood at the molecular level involve bacterial mechanosensitive channels. Under hypo-osmotic stress, bacteria are able to alleviate turgor pressure through mechanosensitive channels that gate directly in response to tension in the membrane lipid bilayer. A key participant in this response is the mechanosensitive channel of large conductance (MscL), a non-selective channel with a high conductance of ~3 nS that gates at tensions close to the membrane lytic tension.
It has been appreciated since the original discovery by C. Kung that the small subunit size (~130 to 160 residues) and the high conductance necessitate that MscL forms a homo-oligomeric channel. Over the past 20 years of study, the proposed oligomeric state of MscL has ranged from monomer to hexamer. Oligomeric state has been shown to vary between MscL homologues and is influenced by lipid/detergent environment. In this thesis, we report the creation of a chimera library to systematically survey the correlation between MscL sequence and oligomeric state to identify the sequence determinants of oligomeric state. Our results demonstrate that although there is no combination of sequences uniquely associated with a given oligomeric state (or mixture of oligomeric states), there are significant correlations. In the quest to characterize the oligomeric state of MscL, an exciting discovery was made about the dynamic nature of the MscL complex. We found that in detergent solution, under mild heating conditions (37 °C – 60 °C), subunits of MscL can exchange between complexes, and the dynamics of this process are sensitive to the protein sequence.
Extensive efforts were made to produce high diffraction quality crystals of MscL for the determination of a high resolution X-ray crystal structure of a full length channel. The surface entropy reduction strategy was applied to the design of S. aureus MscL variants and while the strategy appears to have improved the crystallizability of S. aureus MscL, unfortunately the diffraction qualities of these crystals were not significantly improved. MscL chimeras were also screened for crystallization in various solubilization detergents, but also failed to yield high quality crystals.
MscL is a fascinating protein and continues to serve as a model system for the study of the structural and functional properties of mechanosensitive channels. Further characterization of the MscL chimera library will offer more insight into the characteristics of the channel. Of particular interest are the functional characterization of the chimeras and the exploration of the physiological relevance of intercomplex subunit exchange.
Abstract:
The need for sustainable energy production motivates the study of photovoltaic materials, which convert energy from sunlight directly into electricity. This work has focused on the development of Cu2O as an earth-abundant solar absorber, owing to the abundance of its constituent elements in the earth's crust, its suitable band gap, and its potential for low cost processing. Crystalline wafers of Cu2O with minority carrier diffusion lengths on the order of microns can be manufactured in a uniquely simple fashion: directly from copper foils by thermal oxidation. Furthermore, Cu2O has an optical band gap of 1.9 eV, which gives it a detailed balance energy conversion efficiency of 24.7% and the possibility for an independently connected Si/Cu2O dual junction with a detailed balance efficiency of 44.3%.
However, the highest energy conversion efficiency achieved in a photovoltaic device with a Cu2O absorber layer is currently only 5.38%, despite the favorable optical and electronic properties listed above. There are several challenges to making a Cu2O photovoltaic device, including an inability to dope the material, its relatively low chemical stability compared to other oxides, and a lack of suitable heterojunction partners due to an unusually small electron affinity. We have addressed the low chemical stability, namely the fact that Cu2O is an especially reactive oxide due to its low enthalpy of formation (ΔH_f° = -168.7 kJ/mol), by developing a novel surface preparation technique. We have addressed the lack of suitable heterojunction partners by investigating the heterojunction band alignment of several Zn-VI materials with Cu2O. Finally, we have addressed the typically high series resistance of Cu2O wafers by developing methods to make very thin, bulk Cu2O, including devices on Cu2O wafers as thin as 20 microns. Using these methods we have been able to achieve photovoltages over 1 V, and have demonstrated the potential of a new heterojunction material, Zn(O,S).
Abstract:
A general review of stochastic processes is given in the introduction; definitions, properties and a rough classification are presented together with the position and scope of the author's work as it fits into the general scheme.
The first section presents a brief summary of the pertinent analytical properties of continuous stochastic processes and their probability-theoretic foundations which are used in the sequel.
The remaining two sections (II and III), comprising the body of the work, are the author's contribution to the theory. It turns out that a very inclusive class of continuous stochastic processes is characterized by a fundamental partial differential equation and its adjoint (the Fokker-Planck equations). The coefficients appearing in those equations assimilate, in a most concise way, all the salient properties of the process, freed from boundary value considerations. The writer's work consists in characterizing the processes through these coefficients without recourse to solving the partial differential equations.
First, a class of coefficients leading to a unique, continuous process is presented, and several facts are proven to show why this class is restricted. Then, in terms of the coefficients, the unconditional statistics are deduced, these being the mean, variance and covariance. The most general class of coefficients leading to the Gaussian distribution is deduced, and a complete characterization of these processes is presented. By specializing the coefficients, all the known stochastic processes may be readily studied, and some examples of these are presented, viz. the Einstein process, Bachelier process, Ornstein-Uhlenbeck process, etc. The calculations are effectively reduced to ordinary first order differential equations, and in addition to giving a comprehensive characterization, the derivations are materially simplified compared with solving the original partial differential equations.
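As one concrete instance of this framework (standard material, included only for orientation): for drift coefficient b(x) and diffusion coefficient a(x), the forward Fokker-Planck equation for the transition density p(x, t) is

\[
  \frac{\partial p}{\partial t}
  = -\frac{\partial}{\partial x}\bigl[b(x)\,p\bigr]
    + \tfrac{1}{2}\,\frac{\partial^2}{\partial x^2}\bigl[a(x)\,p\bigr],
\]

and the Ornstein-Uhlenbeck process corresponds to the choice b(x) = -βx, a(x) = σ² (constant), for which the stationary statistics are Gaussian with mean 0, variance σ²/(2β), and covariance (σ²/2β) e^{-β|t|}; the mean, variance and covariance can indeed be read off from the coefficients without solving the equation.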
In the last section the properties of the integral process are presented. After an expository section on the definition, meaning, and importance of the integral process, a particular example is carried through starting from basic definition. This illustrates the fundamental properties, and an inherent paradox. Next the basic coefficients of the integral process are studied in terms of the original coefficients, and the integral process is uniquely characterized. It is shown that the integral process, with a slight modification, is a continuous Markoff process.
The elementary statistics of the integral process are deduced: means, variances, and covariances, in terms of the original coefficients. It is shown that the integral process is never temporally homogeneous when the underlying process is non-degenerate.
Finally, in terms of the original class of admissible coefficients, the statistics of the integral process are explicitly presented, and the integral process of all known continuous processes are specified.
Abstract:
Acetyltransferases and deacetylases catalyze the addition and removal, respectively, of acetyl groups to the epsilon-amino group of protein lysine residues. This modification can affect the function of a protein through several means, including the recruitment of specific binding partners called acetyl-lysine readers. Acetyltransferases, deacetylases, and acetyl-lysine readers have emerged as crucial regulators of biological processes and prominent targets for the treatment of human disease. This work describes a combination of structural, biochemical, biophysical, cell-biological, and organismal studies undertaken on a set of proteins that cumulatively include all steps of the acetylation process: the acetyltransferase MEC-17, the deacetylase SIRT1, and the acetyl-lysine reader DPF2.
Tubulin acetylation by MEC-17 is associated with stable, long-lived microtubule structures. We determined the crystal structure of the catalytic domain of human MEC-17 in complex with the cofactor acetyl-CoA. The structure in combination with an extensive enzymatic analysis of MEC-17 mutants identified residues for cofactor and substrate recognition and activity. A large, evolutionarily conserved hydrophobic surface patch distal to the active site was shown to be necessary for catalysis, suggesting that specificity is achieved by interactions with the alpha-tubulin substrate that extend outside of the modified surface loop. Experiments in C. elegans showed that while MEC-17 is required for touch sensitivity, MEC-17 enzymatic activity is dispensable for this behavior.
SIRT1 deacetylates a wide range of substrates, including p53, NF-kappaB, FOXO transcription factors, and PGC-1-alpha, with roles in cellular processes ranging from energy metabolism to cell survival. SIRT1 activity is uniquely controlled by a C-terminal regulatory segment (CTR). Here we present crystal structures of the catalytic domain of human SIRT1 in complex with the CTR in an apo form and in complex with a cofactor and a pseudo-substrate peptide. The catalytic domain adopts the canonical sirtuin fold. The CTR forms a beta-hairpin structure that complements the beta-sheet of the NAD^+-binding domain, covering an essentially invariant, hydrophobic surface. A comparison of the apo and cofactor-bound structures revealed conformational changes throughout catalysis, including a rotation of a smaller subdomain with respect to the larger NAD^+-binding subdomain. A biochemical analysis identified key residues in the active site, an inhibitory role for the CTR, and distinct structural features of the CTR that mediate binding and inhibition of the SIRT1 catalytic domain.
DPF2 represses myeloid differentiation in acute myelogenous leukemia. Finally, we solved the crystal structure of the tandem PHD domain of human DPF2. We showed that DPF2 preferentially binds H3 tail peptides acetylated at Lys14, and binds H4 tail peptides with no preference for acetylation state. Through a structural and mutational analysis we identify the molecular basis of histone recognition. We propose a model for the role of DPF2 in AML and identify the DPF2 tandem PHD finger domain as a promising novel target for anti-leukemia therapeutics.
Abstract:
Let F(θ) be a separable extension of degree n of a field F. Let Δ and D be integral domains with quotient fields F(θ) and F respectively. Assume that Δ ⊇ D. A mapping φ of Δ into the n × n D matrices is called a Δ/D rep if (i) it is a ring isomorphism and (ii) it maps d onto dI_n whenever d ∈ D. If the matrices are also symmetric, φ is a Δ/D symrep.
Every Δ/D rep can be extended uniquely to an F(θ)/F rep. This extension is completely determined by the image of θ. Two Δ/D reps are called equivalent if the images of θ differ by a D unimodular similarity. There is a one-to-one correspondence between classes of Δ/D reps and classes of Δ ideals having an n element basis over D.
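A minimal illustration of these definitions (a standard example, not taken from the thesis): let n = 2, F = Q, θ = √d for a square-free integer d, Δ = Z[√d], and D = Z. Then

\[
  \varphi(a + b\sqrt{d}) \;=\; \begin{pmatrix} a & bd \\ b & a \end{pmatrix}, \qquad a, b \in \mathbb{Z},
\]

is a Δ/Z rep: it is a ring isomorphism onto its image, it sends each integer a to aI_2, and the image of θ is the companion matrix of x^2 - d. Under the correspondence above this rep is associated with the class of the Δ ideal Δ itself, taken with the basis {1, √d} over Z, and conjugating by a unimodular integer matrix produces the other reps in its class.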
The condition that a given Δ/D rep class contain a Δ/D symrep can be phrased in various ways. Using these formulations it is possible to (i) bound the number of symreps in a given class, (ii) count the number of symreps if F is finite, (iii) establish the existence of an F(θ)/F symrep when n is odd, F is an algebraic number field, and F(θ) is totally real if F is formally real (for n = 3 see Sapiro, “Characteristic polynomials of symmetric matrices” Sibirsk. Mat. Ž. 3 (1962) pp. 280-291), and (iv) study the case D = Z, the integers (see Taussky, “On matrix classes corresponding to an ideal and its inverse” Illinois J. Math. 1 (1957) pp. 108-113 and Faddeev, “On the characteristic equations of rational symmetric matrices” Dokl. Akad. Nauk SSSR 58 (1947) pp. 753-754).
The case D = Z and n = 2 is studied in detail. Let Δ’ be an integral domain also having quotient field F(θ) and such that Δ’ ⊇ Δ. Let φ be a Δ/Z symrep. A method is given for finding a Δ’/Z symrep ʘ such that the Δ’ ideal class corresponding to the class of ʘ is an extension to Δ’ of the Δ ideal class corresponding to the class of φ. The problem of finding all Δ/Z symreps equivalent to a given one is studied.
Abstract:
In 1964 A. W. Goldie [1] posed the problem of determining all rings with identity and minimal condition on left ideals which are faithfully represented on the right side of their left socle. Goldie showed that such a ring which is indecomposable and in which the left and right principal indecomposable ideals have, respectively, unique left and unique right composition series is a complete blocked triangular matrix ring over a skewfield. The general problem suggested above is very difficult. We obtain results under certain natural restrictions which are much weaker than the restrictive assumptions made by Goldie.
We characterize those rings in which the principal indecomposable left ideals each contain a unique minimal left ideal (Theorem (4.2)). It is sufficient to handle indecomposable rings (Lemma (1.4)). Such a ring is also a blocked triangular matrix ring. There exist r positive integers K_1, ..., K_r such that the i,j-th block of a typical matrix is a K_i × K_j matrix with arbitrary entries in a subgroup D_ij of the additive group of a fixed skewfield D. Each D_ii is a sub-skewfield of D and D_ri = D for all i. Conversely, every matrix ring which has this form is indecomposable, faithfully represented on the right side of its left socle, and possesses the property that every principal indecomposable left ideal contains a unique minimal left ideal.
The principal indecomposable left ideals may have unique composition series even though the ring does not have minimal condition on right ideals. We characterize this situation by defining a partial ordering ρ on {1, 2, ..., r}, where we set i ρ j if D_ij ≠ 0. Every principal indecomposable left ideal has a unique composition series if and only if the diagram of ρ is an inverted tree and every D_ij is a one-dimensional left vector space over D_ii (Theorem (5.4)).
We show (Theorem (2.2)) that every ring A of the type we are studying is a unique subdirect sum of less complex rings A_1, ..., A_s of the same type. Namely, each A_i has only one isomorphism class of minimal left ideals and the minimal left ideals of different A_i are non-isomorphic as left A-modules. We give (Theorem (2.1)) necessary and sufficient conditions for a ring which is a subdirect sum of rings A_i having these properties to be faithfully represented on the right side of its left socle. We show ((4.F), p. 42) that up to technical trivia the rings A_i are matrix rings of the form
[...]. Each Q_j comes from the faithful irreducible matrix representation of a certain skewfield over a fixed skewfield D. The bottom row is filled in by arbitrary elements of D.
In Part V we construct an interesting class of rings faithfully represented on their left socle from a given partial ordering on a finite set, given skewfields, and given additive groups. This class of rings contains the ones in which every principal indecomposable left ideal has a unique minimal left ideal. We identify the uniquely determined subdirect summands mentioned above in terms of the given partial ordering (Proposition (5.2)). We conjecture that this technique serves to construct all the rings which are a unique subdirect sum of rings each having the property that every principal indecomposable left ideal contains a unique minimal left ideal.
Abstract:
Systems-level studies of biological systems rely on observations taken at a resolution lower than the essential unit of biology, the cell. Recent technical advances in DNA sequencing have enabled measurements of the transcriptomes in single cells excised from their environment, but it remains a daunting technical problem to reconstruct in situ gene expression patterns from sequencing data. In this thesis I develop methods for the routine, quantitative in situ measurement of gene expression using fluorescence microscopy.
The number of molecular species that can be measured simultaneously by fluorescence microscopy is limited by the palette of spectrally distinct fluorophores. Thus, fluorescence microscopy is traditionally limited to the simultaneous measurement of only five labeled biomolecules at a time. The two methods described in this thesis, super-resolution barcoding and temporal barcoding, represent strategies for overcoming this limitation to monitor expression of many genes in a single cell. Super-resolution barcoding employs optical super-resolution microscopy (SRM) and combinatorial labeling via smFISH (single molecule fluorescence in situ hybridization) to uniquely label individual mRNA species with distinct barcodes resolvable at nanometer resolution. This method dramatically increases the optical space in a cell, allowing a large number of barcodes to be visualized simultaneously. As a proof of principle this technology was used to study the S. cerevisiae calcium stress response. The second method, sequential barcoding, reads out a temporal barcode through multiple rounds of oligonucleotide hybridization to the same mRNA. The multiplexing capacity of sequential barcoding increases exponentially with the number of rounds of hybridization, allowing over a hundred genes to be profiled in only a few rounds of hybridization.
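To make the scaling concrete (the numbers are illustrative rather than the exact parameters used in the experiments): with F spectrally distinct fluorophores read out over N rounds of hybridization, the number of distinguishable barcodes is

\[
  N_{\text{barcodes}} = F^{\,N}, \qquad \text{e.g. } F = 4,\; N = 4 \;\Rightarrow\; 4^4 = 256,
\]

so a few rounds with a standard color palette already cover well over a hundred genes, whereas conventional multicolor smFISH is limited to roughly F targets per sample.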
The utility of sequential barcoding was further demonstrated by adapting this method to study gene expression in mammalian tissues. Mammalian tissues suffer from both a large amount of autofluorescence and light scattering, making detection of smFISH probes on mRNA difficult. An amplified single molecule detection technology, smHCR (single molecule hybridization chain reaction), was developed to allow for the quantification of mRNA in tissue. This technology is demonstrated in combination with light sheet microscopy and background-reducing tissue clearing technology, enabling whole-organ sequential barcoding to monitor in situ gene expression directly in intact mammalian tissue.
The methods presented in this thesis, specifically sequential barcoding and smHCR, enable multiplexed transcriptional observations in any tissue of interest. These technologies will serve as a general platform for future transcriptomic studies of complex tissues.