7 results for PACS: human aspects of IT
in CaltechTHESIS
Abstract:
The SCF ubiquitin ligase complex of budding yeast triggers DNA replication by catalyzing ubiquitination of the S phase CDK inhibitor SIC1. SCF is composed of several evolutionarily conserved proteins, including ySKP1, CDC53 (Cullin), and the F-box protein CDC4. We isolated hSKP1 in a two-hybrid screen with hCUL1, the human homologue of CDC53. We showed that hCUL1 associates with hSKP1 in vivo and directly interacts with hSKP1 and the human F-box protein SKP2 in vitro, forming an SCF-like particle. Moreover, hCUL1 complements the growth defect of yeast CDC53^(ts) mutants, associates with ubiquitination-promoting activity in human cell extracts, and can assemble into functional, chimeric ubiquitin ligase complexes with yeast SCF components. These data demonstrated that hCUL1 functions as part of an SCF ubiquitin ligase complex in human cells. However, purified human SCF complexes consisting of CUL1, SKP1, and SKP2 are inactive in vitro, suggesting that additional factors are required.
Subsequently, mammalian SCF ubiquitin ligases were shown to regulate various physiological processes by targeting important cellular regulators, like IκBα, β-catenin, and p27, for ubiquitin-dependent proteolysis by the 26S proteasome. Little, however, is known about the regulation of the various SCF complexes. By using sequential immunoaffinity purification and mass spectrometry, we identified proteins that interact with the human SCF components SKP2 and CUL1 in vivo. Among them we identified two additional SCF subunits: HRT1, present in all SCF complexes, and CKS1, which binds to SKP2 and is likely to be a subunit of SCF^(SKP2) complexes. Subsequent work by others demonstrated that these proteins are essential for SCF activity. We also discovered that the COP9 Signalosome (CSN), previously described in plants as a suppressor of photomorphogenesis, associates with CUL1 and other SCF subunits in vivo. This interaction is evolutionarily conserved and is also observed with other Cullins, suggesting that all Cullin-based ubiquitin ligases are regulated by CSN. CSN regulates Cullin neddylation, presumably through CSN5/JAB1, a stoichiometric Signalosome subunit and a putative deneddylating enzyme. This work sheds light on the intricate connection between signal transduction pathways and the protein degradation machinery inside the cell, and sets the stage for gaining further insights into the regulation of protein degradation.
Abstract:
The main focus of this thesis is the use of high-throughput sequencing technologies in functional genomics (in particular in the form of ChIP-seq, chromatin immunoprecipitation coupled with sequencing, and RNA-seq) and the study of the structure and regulation of transcriptomes. Some parts of it are of a more methodological nature while others describe the application of these functional genomic tools to address various biological problems. A significant part of the research presented here was conducted as part of the ENCODE (ENCyclopedia Of DNA Elements) Project.
The first part of the thesis focuses on the structure and diversity of the human transcriptome. Chapter 1 contains an analysis of the diversity of the human polyadenylated transcriptome based on RNA-seq data generated for the ENCODE Project. Chapter 2 presents a simulation-based examination of the performance of some of the most popular computational tools used to assemble and quantify transcriptomes. Chapter 3 includes a study of variation in gene expression, alternative splicing, and allelic expression bias at the single-cell level and on a genome-wide scale in human lymphoblastoid cells; it also brings forward a number of methodological considerations critical to the practice of single-cell RNA-seq measurements.
The second part presents several studies applying functional genomic tools to the study of the regulatory biology of organellar genomes, primarily in mammals but also in plants. Chapter 5 contains an analysis of the occupancy of the human mitochondrial genome by TFAM, an important structural and regulatory protein in mitochondria, using ChIP-seq. In Chapter 6, the mitochondrial DNA occupancy of the TFB2M transcriptional regulator, the MTERF termination factor, and the mitochondrial RNA and DNA polymerases is characterized. Chapter 7 consists of an investigation into the curious phenomenon of the physical association of nuclear transcription factors with mitochondrial DNA, based on the diverse collections of transcription factor ChIP-seq datasets generated by the ENCODE, mouseENCODE and modENCODE consortia. In Chapter 8 this line of research is further extended to existing publicly available ChIP-seq datasets in plants and their mitochondrial and plastid genomes.
The third part is dedicated to the analytical and experimental practice of ChIP-seq. As part of the ENCODE Project, a set of metrics for assessing the quality of ChIP-seq experiments was developed, and the results of this activity are presented in Chapter 9. These metrics were later used to carry out a global analysis of ChIP-seq quality in the published literature (Chapter 10). In Chapter 11, the development and initial application of an automated robotic ChIP-seq pipeline (in which these metrics also played a major role) are presented.
The fourth part presents the results of some additional projects the author has been involved in, including the study of the role of the Piwi protein in the transcriptional regulation of transposon expression in Drosophila (Chapter 12), and the use of single-cell RNA-seq to characterize the heterogeneity of gene expression during cellular reprogramming (Chapter 13).
The last part of the thesis provides a review of the results of the ENCODE Project and of the interpretation of the complexity of the biochemical activity exhibited by mammalian genomes that they have revealed (Chapters 15 and 16), an overview of technical developments expected in the near future and their impact on the field of functional genomics (Chapter 14), and a discussion of some thus far insufficiently explored research areas, the future study of which will, in the opinion of the author, provide deep insights into many fundamental but not yet completely answered questions about the transcriptional biology of eukaryotes and its regulation.
Abstract:
In the quest for a descriptive theory of decision-making, the rational actor model in economics imposes rather unrealistic expectations and abilities on human decision makers. The further we move from idealized scenarios, such as perfectly competitive markets, and the more ambitiously we extend the reach of the theory to describe everyday decision-making situations, the less sense these assumptions make. Behavioural economics has instead proposed models based on assumptions that are more psychologically realistic, with the aim of gaining more precision and descriptive power. Increased psychological realism, however, comes at the cost of a greater number of parameters and greater model complexity. There is now a plethora of models, based on different assumptions and applicable in differing contextual settings, and selecting the right model to use tends to be an ad hoc process. In this thesis, we develop optimal experimental design methods and evaluate different behavioural theories against evidence from lab and field experiments.
We look at evidence from controlled laboratory experiments. Subjects are presented with choices between monetary gambles, or lotteries. Different decision-making theories evaluate the choices differently and make distinct predictions about the subjects' choices. Theories whose predictions are inconsistent with the actual choices can be systematically eliminated. Behavioural theories can have multiple parameters, requiring complex experimental designs with a very large number of possible choice tests. This imposes computational and economic constraints on using classical experimental design methods. We develop a methodology of adaptive tests, Bayesian Rapid Optimal Adaptive Designs (BROAD), which sequentially chooses the "most informative" test at each stage and, based on the response, updates its posterior beliefs over the theories, which in turn informs the next most informative test to run. BROAD uses the Equivalence Class Edge Cutting (EC2) criterion to select tests. We prove that the EC2 criterion is adaptively submodular, which allows us to prove theoretical guarantees against the Bayes-optimal testing sequence even in the presence of noisy responses. In simulated ground-truth experiments, we find that the EC2 criterion recovers the true hypotheses with significantly fewer tests than more widely used criteria such as Information Gain and Generalized Binary Search. We show, theoretically as well as experimentally, that, surprisingly, these popular criteria can perform poorly in the presence of noise or subject errors. Furthermore, we use the adaptive submodularity of EC2 to implement an accelerated greedy version of BROAD, which leads to orders-of-magnitude speedups over other methods.
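To convey the flavor of the EC2 greedy step, here is a deliberately simplified, noise-free sketch (the function name, array layout, and simplifications are mine; BROAD's actual implementation handles noisy responses and uses lazy, accelerated evaluation):

```python
import numpy as np

def ec2_select(prior, classes, predictions):
    """Greedy EC2 (Equivalence Class Edge Cutting) test selection, noise-free sketch.

    prior       : (H,) probability of each hypothesis
    classes     : (H,) theory/class label of each hypothesis
    predictions : (H, T) response each hypothesis predicts for each test
    Returns the index of the test with the largest expected cut edge weight.
    """
    H, T = predictions.shape
    # Edges connect hypotheses belonging to different theories;
    # each edge's weight is the product of its endpoints' probabilities.
    w = np.outer(prior, prior) * (classes[:, None] != classes[None, :])
    total = w.sum() / 2.0
    best_test, best_cut = 0, -1.0
    for t in range(T):
        expected_cut = 0.0
        for r in np.unique(predictions[:, t]):
            keep = predictions[:, t] == r      # hypotheses consistent with outcome r
            p_r = prior[keep].sum()            # probability of outcome r
            surviving = w[np.ix_(keep, keep)].sum() / 2.0
            expected_cut += p_r * (total - surviving)
        if expected_cut > best_cut:
            best_cut, best_test = expected_cut, t
    return best_test
```

A test that separates the theories cleanly cuts all cross-theory edges and is preferred over an uninformative one; adaptive submodularity is what makes this one-step greedy choice near-optimal over the whole sequence.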
We use BROAD to perform two experiments. First, we compare the main classes of theories for decision-making under risk, namely expected value, prospect theory, constant relative risk aversion (CRRA), and moments models. Subjects are given an initial endowment and sequentially presented with choices between two lotteries, with the possibility of losses. The lotteries are selected using BROAD, and 57 subjects from Caltech and UCLA are incentivized by randomly realizing one of the lotteries chosen. Aggregate posterior probabilities over the theories show limited evidence in favour of the CRRA and moments models. Classifying the subjects into types showed that most subjects are described by prospect theory, followed by expected value. Adaptive experimental design raises the possibility of strategic manipulation: subjects could mask their true preferences and choose differently in order to obtain more favourable tests in later rounds, thereby increasing their payoffs. We pay close attention to this problem; strategic manipulation is ruled out both because it is infeasible in practice and because we do not find any signatures of it in our data.
In the second experiment, we compare the main theories of time preference: exponential discounting, hyperbolic discounting, the "present bias" models (quasi-hyperbolic (α, β) discounting and fixed-cost discounting), and generalized-hyperbolic discounting. 40 subjects from UCLA were given choices between two options: a smaller but more immediate payoff versus a larger but later payoff. We found very limited evidence for the present-bias models and hyperbolic discounting; most subjects were classified as generalized-hyperbolic discounting types, followed by exponential discounting.
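For reference, the competing discount functions can be written compactly in their standard textbook parameterizations (the thesis's own parameter names, e.g. its (α, β) notation, may differ):

```latex
D_{\mathrm{exp}}(t) = \delta^{t}, \qquad
D_{\mathrm{hyp}}(t) = \frac{1}{1 + k t}, \qquad
D_{\beta\delta}(t) =
\begin{cases}
1 & t = 0 \\
\beta\,\delta^{t} & t > 0
\end{cases}, \qquad
D_{\mathrm{gh}}(t) = (1 + \alpha t)^{-\beta/\alpha}.
```

Exponential discounting is dynamically consistent; the hyperbolic and quasi-hyperbolic forms discount the near future relatively more steeply, and the generalized-hyperbolic form nests hyperbolic behaviour through its two parameters.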
In these models the passage of time is linear. We instead consider a psychological model in which the perception of time is subjective. We prove that when biological (subjective) time is positively dependent, it gives rise to hyperbolic discounting and temporal choice inconsistency.
We also test the predictions of behavioural theories in the "wild". We pay particular attention to prospect theory, which emerged as the dominant theory in our lab experiments on risky choice. Loss aversion and reference dependence predict that consumers will behave in a distinctly different way from what the standard rational model predicts. Specifically, loss aversion predicts that when an item is offered at a discount, the demand for it will be greater than can be explained by its price elasticity. Even more importantly, when the item is no longer discounted, demand for its close substitute will increase excessively. We tested this prediction using a discrete choice model with a loss-averse utility function on data from a large eCommerce retailer. Not only did we identify loss aversion, but we also found that the effect decreased with consumers' experience. We outline the policy implications that consumer loss aversion entails, and strategies for competitive pricing.
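As an illustration of the loss-averse ingredient, the classic Kahneman-Tversky value function can be sketched in a few lines (the parameter values shown are the well-known 1992 estimates, not those fitted in this work):

```python
def prospect_value(x, alpha=0.88, lam=2.25):
    """Kahneman-Tversky value function: concave over gains,
    steeper over losses when lam > 1 (loss aversion)."""
    if x >= 0:
        return x ** alpha
    return -lam * (-x) ** alpha
```

With lam > 1, a loss looms larger than an equal-sized gain relative to the reference point, which is what drives the asymmetric demand response to discounts described above.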
In future work, BROAD can be widely applied to testing different behavioural models, e.g. in social preference and game theory, and in different contextual settings. Additional measurements beyond choice data, including biological measurements such as skin conductance, can be used to eliminate hypotheses more rapidly and speed up model comparison. Discrete choice models also provide a framework for testing behavioural models with field data, and encourage combined lab-field experiments.
Abstract:
This thesis addresses a series of topics related to the question of how people find foreground objects in complex scenes. Using both computer vision modeling and psychophysical analyses, we explore the computational principles of low- and mid-level vision.
We first explore computational methods for generating saliency maps from images and image sequences. We propose an extremely fast algorithm, called the Image Signature, that detects the locations in an image that attract human eye gaze. Through a series of experimental validations based on human behavioral data collected from various psychophysical experiments, we conclude that the Image Signature and its spatio-temporal extension, the Phase Discrepancy, are among the most accurate algorithms for saliency detection under various conditions.
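The core of the published Image Signature computation is only a few lines; here is a minimal single-channel sketch (function names and the smoothing bandwidth are my choices):

```python
import numpy as np
from scipy.fftpack import dct, idct
from scipy.ndimage import gaussian_filter

def dct2(a):
    # Separable 2-D DCT-II (orthonormal)
    return dct(dct(a, axis=0, norm='ortho'), axis=1, norm='ortho')

def idct2(a):
    # Separable 2-D inverse DCT (orthonormal)
    return idct(idct(a, axis=0, norm='ortho'), axis=1, norm='ortho')

def image_signature_saliency(img, sigma=3.0):
    """Saliency map from the sign of the image's DCT:
    reconstruct from sign(DCT(img)), square pointwise, then blur."""
    recon = idct2(np.sign(dct2(img)))
    return gaussian_filter(recon * recon, sigma)
```

Taking only the sign of the DCT discards amplitude information, yet for a sparse foreground on a structured background the reconstruction concentrates energy at the foreground; squaring and blurring turn that concentration into a smooth saliency map.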
In the second part, we bridge the gap between fixation prediction and salient object segmentation with two efforts. First, we propose a new dataset that contains both fixation and object segmentation information. By simultaneously presenting the two types of human data in the same dataset, we are able to analyze their intrinsic connection, as well as to understand the drawbacks of today's "standard" but inappropriately labeled salient object segmentation datasets. Second, we propose an algorithm for salient object segmentation. Based on our discoveries about the connections between fixation data and salient object segmentation data, our model significantly outperforms all existing models on all three datasets, by large margins.
In the third part of the thesis, we discuss topics around the human factors of boundary analysis. Closely related to salient object segmentation, boundary analysis focuses on delimiting the local contours of an object. We identify potential pitfalls in algorithm evaluation for the problem of boundary detection. Our analysis indicates that today's popular boundary detection datasets contain a significant level of noise, which may severely influence benchmarking results. To give further insight into the labeling process, we propose a model that characterizes the human factors at work during labeling.
The analyses reported in this thesis offer new perspectives on a series of interrelated issues in low- and mid-level vision. They raise warning signs about some of today's "standard" procedures, while proposing new directions to encourage future research.
Abstract:
Energy and sustainability have become among the most critical issues of our generation. While the abundant potential of renewable energy sources such as solar and wind provides a real opportunity for sustainability, their intermittency and uncertainty present a daunting operating challenge. This thesis aims to develop analytical models, deployable algorithms, and real systems that enable the efficient integration of renewable energy into complex distributed systems with limited information.
The first thrust of the thesis is to make IT systems more sustainable by facilitating the integration of renewable energy into these systems. IT represents one of the fastest-growing sectors in energy usage and greenhouse gas emissions. Over the last decade there have been dramatic improvements in the energy efficiency of IT systems, but these efficiency improvements do not necessarily lead to reductions in energy consumption, because ever more servers are demanded. Further, little effort has been put into making IT more sustainable, and most of the improvements have come from improved "engineering" rather than improved "algorithms". In contrast, my work focuses on developing algorithms, with rigorous theoretical analysis, that improve the sustainability of IT. In particular, this thesis seeks to exploit the flexibilities of cloud workloads both (i) in time, by scheduling delay-tolerant workloads, and (ii) in space, by routing requests to geographically diverse data centers. These opportunities allow data centers to respond adaptively to renewable availability, varying cooling efficiency, and fluctuating energy prices, while still meeting performance requirements. The design of the enabling algorithms is, however, very challenging because of limited information, non-smooth objective functions, and the need for distributed control. Novel distributed algorithms are developed, with theoretically provable guarantees, to enable "follow the renewables" routing. Moving from theory to practice, I helped HP design and implement the industry's first Net-zero Energy Data Center.
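The intuition behind "follow the renewables" routing can be conveyed by a deliberately simplified, centralized greedy sketch (illustrative only; the thesis's actual algorithms are online and distributed, with provable guarantees, and the function below is my own toy construction):

```python
def follow_renewables(demand, renewable, capacity):
    """Toy greedy routing sketch: send `demand` units of work to the
    data centers with the most renewable supply, up to capacity.

    demand    : total workload to route
    renewable : per-center renewable energy currently available
    capacity  : per-center workload capacity
    Returns the per-center allocation list.
    """
    n = len(capacity)
    alloc = [0.0] * n
    # Visit centers in decreasing order of available renewable energy.
    order = sorted(range(n), key=lambda i: renewable[i], reverse=True)
    remaining = demand
    for i in order:
        take = min(capacity[i], remaining)
        alloc[i] = take
        remaining -= take
        if remaining <= 0:
            break
    return alloc
```

A real system must do this online, under uncertainty about future renewable supply and prices, and without a central coordinator, which is where the distributed algorithms and their guarantees come in.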
The second thrust of this thesis is to use IT systems to improve the sustainability and efficiency of our energy infrastructure through data center demand response. The main challenges as we integrate more renewable sources into the existing power grid come from the fluctuation and unpredictability of renewable generation. Although energy storage and reserves can potentially solve these issues, they are very costly. One promising alternative is to make cloud data centers demand responsive; the potential of such an approach is huge.
To realize this potential, we need adaptive and distributed control of cloud data centers and new electricity market designs for distributed electricity resources. My work progresses in both directions. In particular, I have designed online algorithms with theoretically guaranteed performance for data center operators to deal with uncertainties under popular demand response programs. Based on local control rules of customers, I have further designed new pricing schemes for demand response that align the interests of customers, utility companies, and society, improving social welfare.
Abstract:
This thesis is divided into three chapters. In the first chapter we study smooth sets with respect to a Borel equivalence relation E on a Polish space X. The collection of smooth sets forms a σ-ideal. We think of smooth sets as analogs of countable sets, and we show that an analog of the perfect set theorem for Σ11 sets holds in the context of smooth sets. We also show that the collection of Σ11 smooth sets is Π11 on the codes. The analogs of thin sets are called sparse sets. We prove that there is a largest Π11 sparse set and give a characterization of it. We show that in L there is a Π11 sparse set which is not smooth. These results are analogs of results known for the ideal of countable sets, but it remains open whether large cardinal axioms imply that Π11 sparse sets are smooth. Some more specific results are proved for the case of a countable Borel equivalence relation. We also study I(E), the σ-ideal of closed E-smooth sets. Among other things, we prove that E is smooth iff I(E) is Borel.
In chapter 2 we study σ-ideals of compact sets. We are interested in the relationship between descriptive set-theoretic properties such as thinness, strong calibration, and the covering property. We also study products of σ-ideals from the same point of view. In chapter 3 we show that if a σ-ideal I has the covering property (an abstract version of the perfect set theorem for Σ11 sets), then there is a largest Π11 set in Iint (i.e., a largest Π11 set every closed subset of which is in I). For σ-ideals on 2ω we present a characterization of this set similar to that known for C1, the largest thin Π11 set. As a corollary we get that if there are only countably many reals in L, then the covering property holds for Σ12 sets.
Abstract:
The microwave response of the superconducting state in equilibrium and non-equilibrium configurations was examined experimentally and analytically. Thin film superconductors were mostly studied in order to explore spatial effects. The response parameter measured was the surface impedance.
For small microwave intensity the surface impedance at 10 GHz was measured for a variety of samples (mostly Sn) over a wide range of sample thickness and temperature. A detailed analysis based on the BCS theory was developed for calculating the surface impedance for general thickness and other experimental parameters. Experiment and theory agreed with each other to within the experimental accuracy. Thus it was established that the samples, thin films as well as bulk, were well characterised at low microwave powers (near equilibrium).
Thin films were perturbed by a small dc supercurrent, and the effect on the superconducting order parameter and the quasiparticle response was determined by measuring changes in the surface resistance (still at low microwave intensity, and independent of it) due to the induced current. The use of fully superconducting resonators enabled the measurement of very small changes in the surface resistance (< 10^(-9) Ω/sq.). These experiments yield information about the dynamics of the order parameter and quasiparticle systems. For all the films studied, the results at temperatures near Tc could be described by the thermodynamic depression of the order parameter due to the static current, leading to a quadratic increase of the surface resistance with current.
For the thinnest films the low temperature results were surprising in that the surface resistance decreased with increasing current. An explanation is proposed according to which this decrease occurs due to an additional high frequency quasiparticle current caused by the combined presence of both static and high frequency fields. For frequencies larger than the inverse of the quasiparticle relaxation time this additional current is out of phase (by π) with the microwave electric field and is observed as a decrease of surface resistance. Calculations agree quantitatively with experimental results. This is the first observation and explanation of this non-equilibrium quasiparticle effect.
For thicker films of Sn, the low temperature surface resistance was found to increase with applied static current. It is proposed that due to the spatial non-uniformity of the induced current distribution across the thicker films, the above purely temporal analysis of the local quasiparticle response needs to be generalised to include space and time non-equilibrium effects.
The nonlinear interaction of microwaves and superconducting films was also examined in a third set of experiments. The surface impedance of thin films was measured as a function of the incident microwave magnetic field. The experiments exploit the ability to measure the absorbed microwave power and applied microwave magnetic field absolutely. It was found that the applied surface microwave field could not be raised above a certain threshold level, at which the absorption increased abruptly. This critical field level represents a dynamic critical field and was found to be associated with the penetration of the applied field into the film at values well below the thermodynamic critical field for the configuration of a field applied to one side of the film. The penetration occurs despite the thermal stability of the film, which was unequivocally demonstrated by experiment. A new mechanism for such penetration, via the formation of a vortex-antivortex pair, is proposed. The experimental results for the thinnest films agreed with the calculated values of this pair generation field. The observations of increased transmission at the critical field level, and the suppression of the process by a metallic ground plane, further support the proposed model.