27 results for Generalized function

in CaltechTHESIS


Relevance: 30.00%

Abstract:

The Edge Function method formerly developed by Quinlan (25) is applied to solve the problem of thin elastic plates resting on spring-supported foundations and subjected to lateral loads. The method can be applied to plates of any convex polygonal shape; however, since most plates are rectangular, this specific class is investigated in this thesis. The method discussed can also be applied easily to other foundation models (e.g., springs connected to each other by a membrane) as long as the resulting differential equation is linear. In Chapter VII, the solution of a specific problem is compared with a known solution from the literature. In Chapter VIII, further comparisons are given. The problems of a concentrated load on an edge, and on a corner, of a plate, provided they are far away from other boundaries, are also treated in that chapter and generalized to other loading intensities and/or plate spring constants for Poisson's ratio equal to 0.2.
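For orientation (this is standard plate theory, not quoted from the thesis): a thin plate resting on a Winkler-type spring foundation obeys

$$D\,\nabla^4 w(x,y) + k\,w(x,y) = q(x,y), \qquad D = \frac{E h^3}{12(1-\nu^2)},$$

where $w$ is the lateral deflection, $q$ the lateral load, $k$ the foundation (spring) modulus, and $D$ the flexural rigidity of a plate of thickness $h$, Young's modulus $E$, and Poisson's ratio $\nu$. The membrane-coupled foundation mentioned above modifies the $k\,w$ term but keeps the equation linear.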

Relevance: 30.00%

Abstract:

The process of prophage integration by phage λ and the function and structure of the chromosomal elements required for λ integration have been studied with the use of λ deletion mutants. Since attφ, the substrate of the integration enzymes, is not essential for λ growth, and since attφ resides in a portion of the λ chromosome which is not necessary for vegetative growth, viable λ deletion mutants were isolated and examined to dissect the structure of attφ.

Deletion mutants were selected from wild type populations by treating the phage under conditions where phage are inactivated at a rate dependent on the DNA content of the particles. A number of deletion mutants were obtained in this way, and many of these mutants proved to have defects in integration. These defects were defined by analyzing the properties of Int-promoted recombination in these att mutants.

The types of mutants found and their properties indicated that attφ has three components: a cross-over point bordered on either side by recognition elements whose sequence is specifically required for normal integration. The interactions of the recognition elements in Int-promoted recombination between att mutants were examined and proved to be quite complex. In general, however, it appears that the λ integration system can function with a diverse array of mutant att sites.

The structure of attφ was examined by comparing the genetic properties of various att mutants with their location in the λ chromosome. To map these mutants, the techniques of heteroduplex DNA formation and electron microscopy were employed. It was found that integration cross-overs occur at only one point in attφ and that the recognition sequences that direct the integration enzymes to their site of action are quite small, less than 2000 nucleotides each. Furthermore, no base pair homology was detected between attφ and its bacterial analog, attB. This result clearly demonstrates that λ integration can occur between chromosomes which have little, if any, homology. In this respect, λ integration is unique as a system of recombination since most forms of generalized recombination require extensive base pair homology.

An additional study on the genetic and physical distances in the left arm of the λ genome was described. Here, a large number of conditional lethal nonsense mutants were isolated and mapped, and a genetic map of the entire left arm, comprising a total of 18 genes, was constructed. Four of these genes were discovered in this study. A series of λdg transducing phages was mapped by heteroduplex electron microscopy and the relationship between physical and genetic distances in the left arm was determined. The results indicate that recombination frequency in the left arm is an accurate reflection of physical distances, and moreover, there do not appear to be any undiscovered genes in this segment of the genome.

Relevance: 30.00%

Abstract:

Let E be a compact subset of the n-dimensional unit cube, $I^n$, and let C be a collection of convex bodies, all of positive n-dimensional Lebesgue measure, such that C contains bodies with arbitrarily small measure. The dimension of E with respect to the covering class C is defined to be the number

$$d_C(E) = \sup\{\beta : H_{\beta, C}(E) > 0\},$$

where $H_{\beta, C}$ is the outer measure

$$H_{\beta, C}(E) = \inf\Big\{\sum_i m(C_i)^\beta : \bigcup_i C_i \supseteq E,\ C_i \in C\Big\}.$$

Only the one- and two-dimensional cases are studied. Moreover, the covering classes considered are those consisting of intervals and rectangles, parallel to the coordinate axes, and those closed under translations. A covering class is identified with a set of points in the left-open portion, $I'^n$, of $I^n$, whose closure intersects $I^n - I'^n$. For n = 2, the outer measure $H_{\beta, C}$ is adopted in place of the usual

$$\inf\Big\{\sum_i \big(\operatorname{diam} C_i\big)^\beta : \bigcup_i C_i \supseteq E,\ C_i \in C\Big\},$$

for the purpose of studying the influence of the shape of the covering sets on the dimension $d_C(E)$.

If E is a closed set in $I^1$, let M(E) be the class of all non-decreasing functions $\mu(x)$, supported on E, with $\mu(x) = 0$ for $x \le 0$ and $\mu(x) = 1$ for $x \ge 1$. Define, for each $\mu \in M(E)$,

$$d_C(\mu) = \liminf_{c \to 0} \frac{\log \Delta\mu(c)}{\log c}, \qquad c \in C,$$

where $\Delta\mu(c) = \bigvee_x \big(\mu(x+c) - \mu(x)\big)$. It is shown that

$$d_C(E) = \sup\{d_C(\mu) : \mu \in M(E)\}.$$

This notion of dimension is extended to a certain class $\mathcal{F}$ of sub-additive functions, and the problem of studying the behavior of $d_C(E)$ as a function of the covering class C is reduced to the study of $d_C(f)$ where $f \in \mathcal{F}$. Specifically, the set of points in $I^2$,

$$(*)\qquad \{(d_B(f), d_C(f)) : f \in \mathcal{F}\},$$

is characterized by a comparison of the relative positions of the points of B and C. A region of the form (*) is always closed and doubly-starred with respect to the points (0, 0) and (1, 1). Conversely, given any closed region in $I^2$, doubly-starred with respect to (0, 0) and (1, 1), there are covering classes B and C such that (*) is exactly that region. All of the results are shown to apply to the dimension of closed sets E. Similar results can be obtained when a finite number of covering classes are considered.

In two dimensions, the notion of dimension is extended to the class M of functions f(x, y), non-decreasing in x and y, supported on $I^2$ with $f(x, y) = 0$ for $x \cdot y = 0$ and $f(1, 1) = 1$, by the formula

$$d_C(f) = \liminf_{s \cdot t \to 0} \frac{\log \Delta f(s, t)}{\log (s \cdot t)}, \qquad (s, t) \in C,$$

where

$$\Delta f(s, t) = \bigvee_{x, y} \big(f(x+s, y+t) - f(x+s, y) - f(x, y+t) + f(x, y)\big).$$

A characterization of the equivalence $d_{C_1}(f) = d_{C_2}(f)$ for all $f \in M$ is given by a comparison of the gaps in the sets of products $s \cdot t$ and quotients $s/t$, $(s, t) \in C_i$ $(i = 1, 2)$.
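As a concrete illustration of dimension with respect to a covering class (an illustration of ours, not taken from the thesis): for the middle-thirds Cantor set covered by translated intervals of length $3^{-k}$, counting the intervals needed at each scale recovers the familiar value $\log 2/\log 3 \approx 0.631$. A minimal Python sketch:

```python
import numpy as np

def cantor_points(level):
    """Left endpoints of the 2**level intervals of the middle-thirds Cantor set."""
    pts = np.array([0.0])
    for _ in range(level):
        pts = np.concatenate([pts / 3.0, pts / 3.0 + 2.0 / 3.0])
    return pts

def box_count(points, eps):
    """Number of length-eps intervals (a covering class of translates) needed."""
    return len(np.unique(np.floor(points / eps)))

# Estimate the dimension from the slope of log N(eps) against log(1/eps).
pts = cantor_points(12)
epss = np.array([3.0 ** -k for k in range(2, 9)])
counts = np.array([box_count(pts, e) for e in epss])
slope = np.polyfit(np.log(1.0 / epss), np.log(counts), 1)[0]
print(f"estimated dimension ~ {slope:.4f}  (log 2 / log 3 = {np.log(2)/np.log(3):.4f})")
```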

Relevance: 20.00%

Abstract:

This dissertation is concerned with the problem of determining the dynamic characteristics of complicated engineering systems and structures from the measurements made during dynamic tests or natural excitations. Particular attention is given to the identification and modeling of the behavior of structural dynamic systems in the nonlinear hysteretic response regime. Once a model for the system has been identified, it is intended to use this model to assess the condition of the system and to predict the response to future excitations.

A new identification methodology, based upon a generalization of the method of modal identification, is developed for multi-degree-of-freedom dynamical systems subjected to base motion. The situation considered herein is that in which only the base input and the response of a small number of degrees of freedom of the system are measured. In this method, called the generalized modal identification method, the response is separated into "modes" which are analogous to those of a linear system. Both parametric and nonparametric models can be employed to extract the unknown nature, hysteretic or nonhysteretic, of the generalized restoring force for each mode.

In this study, a simple four-term nonparametric model is used first to provide a nonhysteretic estimate of the nonlinear stiffness and energy dissipation behavior. To extract the hysteretic nature of nonlinear systems, a two-parameter distributed element model is then employed. This model exploits the results of the nonparametric identification as an initial estimate for the model parameters. This approach greatly improves the convergence of the subsequent optimization process.
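To make the identification step concrete, here is a minimal sketch (ours, with an assumed cubic stiffness/damping basis; the abstract does not specify the four terms used in the thesis) of fitting a four-term nonhysteretic restoring-force model by least squares:

```python
import numpy as np

# Hypothetical sketch: identify a nonhysteretic restoring force r(x, xdot)
# from measured response records using a four-term basis (assumed here).
rng = np.random.default_rng(0)
t = np.linspace(0.0, 20.0, 2000)
x = np.sin(1.3 * t) + 0.3 * np.sin(2.9 * t)   # measured displacement
xdot = np.gradient(x, t)                       # measured velocity

# "True" system, used only to synthesize data: Duffing-type restoring force.
r_true = 4.0 * x + 1.5 * x**3 + 0.4 * xdot + 0.05 * xdot**3
r_meas = r_true + 0.02 * rng.standard_normal(t.size)  # add measurement noise

# Least-squares fit of the four coefficients.
basis = np.column_stack([x, x**3, xdot, xdot**3])
coef, *_ = np.linalg.lstsq(basis, r_meas, rcond=None)
print("identified coefficients:", np.round(coef, 3))
```

In the thesis's two-stage scheme, such a nonparametric estimate would then seed the parameters of the hysteretic distributed element model before optimization.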

The capability of the new method is verified using simulated response data from a three-degree-of-freedom system. The new method is also applied to the analysis of response data obtained from the U.S.-Japan cooperative pseudo-dynamic test of a full-scale six-story steel-frame structure.

The new system identification method described has been found to be both accurate and computationally efficient. It is believed that it will provide a useful tool for the analysis of structural response data.

Relevance: 20.00%

Abstract:

The author has constructed a synthetic gene for α-lytic protease. Since the DNA sequence of the protein was not known, the gene was designed by reverse translation of α-lytic protease's amino acid sequence. Unique restriction sites were carefully sought in the degenerate DNA sequence to aid in future mutagenesis studies. The unique restriction sites were placed approximately 50 base pairs apart and their appropriate codons used in the DNA sequence. The codons used to construct the DNA sequence of α-lytic protease are codons preferred in E. coli or used in the production of β-lactamase. Codon usage was also distributed evenly to ensure that no single codon was heavily used. The gene was essentially constructed from the outside in: it was built in a stepwise fashion using plasmids as the vehicles for the α-lytic oligomers. The use of plasmids allows the replication and isolation of large quantities of the intermediates during gene synthesis. The α-lytic DNA is a double-stranded oligomer with sufficient overhang and sticky ends to anneal correctly in the vector. After six steps of incorporating α-lytic DNA, the gene was completed and sequenced to ensure that the correct DNA sequence was present and that no mutations had occurred in the structural gene.
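The reverse-translation step can be illustrated with a toy sketch (ours; the table below is a small illustrative subset of E. coli-preferred codons, and the real design additionally balanced codon usage and engineered restriction sites):

```python
# Toy sketch of reverse translation: map each amino acid to one preferred
# codon (illustrative subset; the actual design table was much richer).
PREFERRED = {
    "A": "GCT", "G": "GGT", "L": "CTG", "S": "TCT",
    "V": "GTT", "T": "ACT", "N": "AAC", "D": "GAC",
}

def reverse_translate(peptide: str) -> str:
    """Return one DNA sequence encoding the peptide, codon by codon."""
    return "".join(PREFERRED[aa] for aa in peptide)

print(reverse_translate("ANLVGTS"))  # -> GCTAACCTGGTTGGTACTTCT
```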

β-lactamase is the other serine hydrolase studied in this thesis. The author used the class A RTEM-1 β-lactamase encoded on the plasmid pBR322 to investigate the role of the conserved threonine residue at position 71. Cassette mutagenesis was previously used to generate all possible amino acid substitutions at position 71. The work presented here describes the purification and kinetic characterization of a T71H mutant previously constructed by S. Schultz. The mutated gene was transferred into plasmid pJN for expression and induced with IPTG. The enzyme was purified to homogeneity by column chromatography and FPLC. Kinetic studies reveal that the mutant has lower k_cat values on benzylpenicillin, cephalothin, and 6-aminopenicillanic acid, but no changes in K_m except for cephalothin, which is approximately 4 times higher. The mutant did not change significantly in its pH profile compared to the wild-type enzyme. The mutant is also more sensitive to thermal denaturation than the wild-type enzyme. However, experimental evidence indicates that the probable generation of a positive charge at position 71 thermally stabilized the mutant.

Relevance: 20.00%

Abstract:

In response to infection or tissue dysfunction, immune cells develop into highly heterogeneous repertoires with diverse functions. Capturing the full spectrum of these functions requires analysis of large numbers of effector molecules from single cells. However, currently only 3-5 functional proteins can be measured from single cells. We developed a single-cell functional proteomics approach that integrates a microchip platform with multiplex cell purification. This approach can quantitate 20 proteins from >5,000 phenotypically pure single cells simultaneously. With a 1-million-fold miniaturization, the system can detect down to ~100 molecules and requires only ~10^4 cells. Single-cell functional proteomic analysis finds broad applications in basic, translational, and clinical studies. In the three studies conducted here, it yielded critical insights into clinical cancer immunotherapy, the mechanism of inflammatory bowel disease (IBD), and hematopoietic stem cell (HSC) biology.

To study phenotypically defined cell populations, single-cell barcode microchips were coupled with upstream multiplex cell purification based on up to 11 parameters. Statistical algorithms were developed to process and model the high-dimensional readouts. This analysis evaluates rare cells and is versatile across cell types and protein panels. (1) We conducted an immune-monitoring study of a phase 2 cancer cellular immunotherapy clinical trial that used T-cell receptor (TCR) transgenic T cells as the major therapeutic to treat metastatic melanoma. We evaluated the functional proteome of 4 antigen-specific, phenotypically defined T cell populations from the peripheral blood of 3 patients across 8 time points. (2) Natural killer (NK) cells can play a protective role in chronic inflammation, and their surface receptor, the killer immunoglobulin-like receptor (KIR), has been identified as a risk factor for IBD. We compared the functional behavior of NK cells with differential KIR expression. These NK cells were retrieved from the blood of 12 patients with different genetic backgrounds. (3) HSCs are the progenitors of immune cells and were thought to have no immediate functional capacity against pathogens. However, recent studies identified expression of Toll-like receptors (TLRs) on HSCs. We studied the functional capacity of HSCs upon TLR activation. The comparison of HSCs from wild-type mice against those from genetic knock-out mouse models elucidates the responding signaling pathway.

In all three cases, we observed profound functional heterogeneity within phenotypically defined cells. Polyfunctional cells that conduct multiple functions also produce those proteins in large amounts; they dominate the immune response. In the cancer immunotherapy, the strong cytotoxic and antitumor functions of transgenic TCR T cells contributed to a ~30% tumor reduction immediately after the therapy. However, this infused immune response disappeared within 2-3 weeks. Later on, some patients gained a second antitumor response, consisting of the emergence of endogenous antitumor cytotoxic T cells and their production of multiple antitumor functions. These patients showed more effective long-term tumor control. In the IBD mechanism study, we noticed that, compared with others, NK cells expressing the KIR2DL3 receptor secreted a large array of effector proteins, such as TNF-α, CCLs, and CXCLs. The functions of these cells regulated disease-contributing cells and protected host tissues, and their presence correlated with IBD disease susceptibility. In the HSC study, the HSCs exhibited functional capacity by producing TNF-α, IL-6, and GM-CSF, and TLR stimulation activated NF-κB signaling in HSCs. The single-cell functional proteome contains rich information that is independent of the genome and transcriptome. In all three cases, functional proteomic evaluation uncovered critical biological insights that would not have been resolved otherwise. The integrated single-cell functional proteomic analysis constructed a detailed kinetic picture of the immune response during the clinical cancer immunotherapy, revealed concrete functional evidence connecting genetics to IBD susceptibility, and provided predictors that correlated with clinical responses and pathogenic outcomes.

Relevance: 20.00%

Abstract:

During inflammation and infection, hematopoietic stem and progenitor cells (HSPCs) are stimulated to proliferate and differentiate into mature immune cells, especially of the myeloid lineage. MicroRNA-146a (miR-146a) is a critical negative regulator of inflammation. Deletion of the gene encoding miR-146a, which is expressed in all blood cell types, produces effects that appear as dysregulated inflammatory hematopoiesis, leading to a decline in the number and quality of hematopoietic stem cells (HSCs), excessive myeloproliferation, and, ultimately, exhaustion of the HSCs and hematopoietic neoplasms. Six-week-old miR-146a-deficient mice are normal, with no effect on cell numbers, but by 4 months bone marrow hypercellularity can be seen, and by 8 months marrow exhaustion is becoming evident. The ability of HSCs to replenish the entire hematopoietic repertoire in a myelo-ablated mouse also declines precipitously as miR-146a-deficient mice age. In the absence of miR-146a, LPS-mediated serial inflammatory stimulation accelerates the effects of aging. This chronic inflammatory stress on HSCs in miR-146a-deficient mice involves a molecular axis consisting of upregulation of the signaling protein TRAF6, leading to excessive activity of the transcription factor NF-κB and overproduction of the cytokine IL-6. At the cellular level, transplant studies show that the defects are attributable both to an intrinsic problem in the miR-146a-deficient HSCs and to extrinsic effects of miR-146a-deficient lymphocytes and non-hematopoietic cells. This study has identified a microRNA, miR-146a, as a critical regulator of HSC homeostasis during chronic inflammatory challenge in mice and has provided a molecular connection between chronic inflammation and the development of bone marrow failure and myeloproliferative neoplasms. This may have implications for human hematopoietic malignancies, such as myelodysplastic syndrome, which frequently displays downregulated miR-146a expression.

Relevance: 20.00%

Abstract:

The dissertation is concerned with the mathematical study of various network problems. First, three real-world networks are considered: (i) the human brain network, (ii) communication networks, and (iii) electric power networks. Although these networks perform very different tasks, they share similar mathematical foundations. The high-level goal is to analyze and/or synthesize each of these systems from a “control and optimization” point of view. After studying these three real-world networks, two abstract network problems motivated by power systems are also explored: “flow optimization over a flow network” and “nonlinear optimization over a generalized weighted graph”. The results derived in this dissertation are summarized below.

Brain Networks: Neuroimaging data reveal the coordinated activity of spatially distinct brain regions, which may be represented mathematically as a network of nodes (brain regions) and links (interdependencies). To obtain the brain connectivity network, the graphs associated with the correlation matrix and with the inverse covariance matrix, describing marginal and conditional dependencies between brain regions respectively, have been proposed in the literature. A question arises as to whether either of these graphs provides useful information about brain connectivity. Due to the electrical properties of the brain, this problem is investigated in the context of electrical circuits. First, we consider an electric circuit model and show that the inverse covariance matrix of the node voltages reveals the topology of the circuit. Second, we study the problem of finding the topology of the circuit from measurements alone. In this case, by assuming that the circuit is hidden inside a black box and only the nodal signals are available for measurement, the aim is to find the topology of the circuit when a limited number of samples are available. For this purpose, we deploy the graphical lasso technique to estimate a sparse inverse covariance matrix. It is shown that the graphical lasso can recover most of the circuit topology if the exact covariance matrix is well-conditioned, but it may fail to work well when this matrix is ill-conditioned. To deal with ill-conditioned matrices, we propose a small modification to the graphical lasso algorithm and demonstrate its performance. Finally, the technique developed in this work is applied to resting-state fMRI data from a number of healthy subjects.
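A minimal sketch of the baseline technique (the standard graphical lasso as implemented in scikit-learn, not the thesis's modified algorithm), recovering the edges of a chain-structured "circuit" from samples of its node signals:

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(1)
n = 5
# Build a well-conditioned sparse precision (inverse covariance) matrix
# for a chain of 5 nodes: nonzeros only between consecutive nodes.
prec = np.eye(n) * 2.0
for i in range(n - 1):
    prec[i, i + 1] = prec[i + 1, i] = -0.6
cov = np.linalg.inv(prec)

# Draw samples of the nodal signals and fit the graphical lasso.
X = rng.multivariate_normal(np.zeros(n), cov, size=4000)
model = GraphicalLasso(alpha=0.05).fit(X)

# Nonzero off-diagonal entries of the estimated precision matrix are the
# inferred edges; for this chain they should connect consecutive nodes.
edges = np.argwhere(np.abs(np.triu(model.precision_, k=1)) > 1e-3)
print("inferred edges:", edges.tolist())
```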

Communication Networks: Congestion control techniques aim to adjust the transmission rates of competing users in the Internet in such a way that network resources are shared efficiently. Despite the progress in the analysis and synthesis of Internet congestion control, almost all existing fluid models of congestion control assume that every link in the path of a flow observes the original source rate. To address this issue, a more accurate model is derived in this work for the behavior of the network under an arbitrary congestion controller, one that takes into account the effect of buffering (queueing) on data flows. Using this model, it is proved that the well-known Internet congestion control algorithms may no longer be stable for the common pricing schemes unless a sufficient condition is satisfied. It is also shown that these algorithms are guaranteed to be stable if a new pricing mechanism is used.
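For reference, the classical fluid models in question descend from Kelly-style primal congestion control, schematically

$$\dot{x}_r(t) \;=\; \kappa_r\Big(w_r - x_r(t)\sum_{l \in r} p_l\big(\textstyle\sum_{s:\, l \in s} x_s(t)\big)\Big),$$

where $x_r$ is the rate of source $r$, $w_r$ its willingness to pay, and $p_l$ the congestion price at link $l$. Note that every link on the route is fed the original source rates $x_s$, which is precisely the assumption the buffering-aware model above revisits.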

Electrical Power Networks: Optimal power flow (OPF) has been one of the most studied problems for power systems since its introduction by Carpentier in 1962. This problem is concerned with finding an optimal operating point of a power network minimizing the total power generation cost subject to network and physical constraints. It is well known that OPF is computationally hard to solve due to the nonlinear interrelation among the optimization variables. The objective is to identify a large class of networks over which every OPF problem can be solved in polynomial time. To this end, a convex relaxation is proposed, which solves the OPF problem exactly for every radial network and every meshed network with a sufficient number of phase shifters, provided power over-delivery is allowed. The concept of “power over-delivery” is equivalent to relaxing the power balance equations to inequality constraints.
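Schematically (our notation; sign conventions vary across formulations), the OPF problem and the relaxation read

$$\min_{V,\,p}\ \sum_k c_k(p_k) \quad \text{s.t.}\quad p_k - d_k = \operatorname{Re}\big\{V_k\,(YV)_k^*\big\}\ \ \forall k,\qquad \text{voltage and line limits},$$

where $V$ is the vector of bus voltages, $Y$ the admittance matrix, and $p_k, d_k$ the generation and demand at bus $k$; "power over-delivery" replaces the balance equality with an inequality, so a bus may receive more power than it demands.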

Flow Networks: In this part of the dissertation, the minimum-cost flow problem over an arbitrary flow network is considered. In this problem, each node is associated with some possibly unknown injection, each line has two unknown flows at its ends related to each other via a nonlinear function, and all injections and flows need to satisfy certain box constraints. This problem, named generalized network flow (GNF), is highly non-convex due to its nonlinear equality constraints. Under the assumption of monotonicity and convexity of the flow and cost functions, a convex relaxation is proposed, which always finds the optimal injections. A primary application of this work is in the OPF problem. The results of this work on GNF prove that the relaxation on power balance equations (i.e., load over-delivery) is not needed in practice under a very mild angle assumption.

Generalized Weighted Graphs: Motivated by power optimizations, this part aims to find a global optimization technique for a nonlinear optimization defined over a generalized weighted graph. Every edge of this type of graph is associated with a weight set corresponding to the known parameters of the optimization (e.g., the coefficients). The motivation behind this problem is to investigate how the (hidden) structure of a given real/complex valued optimization makes the problem easy to solve, and indeed the generalized weighted graph is introduced to capture the structure of an optimization. Various sufficient conditions are derived, which relate the polynomial-time solvability of different classes of optimization problems to weak properties of the generalized weighted graph such as its topology and the sign definiteness of its weight sets. As an application, it is proved that a broad class of real and complex optimizations over power networks are polynomial-time solvable due to the passivity of transmission lines and transformers.

Relevance: 20.00%

Abstract:

In noncooperative cost-sharing games, individually strategic agents choose resources based on how the welfare (cost or revenue) generated at each resource, which depends on the set of agents choosing it, is distributed. The focus is on finding distribution rules that lead to stable allocations, formalized by the concept of Nash equilibrium; canonical examples are the Shapley value (budget-balanced) and marginal contribution (not budget-balanced) rules.
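For reference, the two canonical rules named above have standard closed forms: the Shapley value and the marginal contribution of agent $i$ under welfare function $W$ on agent set $N$ are

$$\phi_i(W) = \sum_{S \subseteq N \setminus \{i\}} \frac{|S|!\,(|N|-|S|-1)!}{|N|!}\,\big(W(S \cup \{i\}) - W(S)\big), \qquad MC_i(W) = W(N) - W(N \setminus \{i\});$$

the GWSV family discussed below generalizes the former with agent weights.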

Recent work that seeks to characterize the space of all such rules shows that the only budget-balanced distribution rules guaranteeing equilibrium existence in all welfare-sharing games are generalized weighted Shapley values (GWSVs), by exhibiting a specific 'worst-case' welfare function which requires that GWSV rules be used. Our work provides an exact characterization of the space of distribution rules (not necessarily budget-balanced) for any specific local welfare functions, for a general class of scalable and separable games with well-known applications, e.g., facility location, routing, network formation, and coverage games.

We show that all games conditioned on any fixed local welfare functions possess an equilibrium if and only if the distribution rules are equivalent to GWSV rules on some 'ground' welfare functions. Therefore, it is neither the existence of some worst-case welfare function nor the restriction to budget-balanced rules that limits the design to GWSVs. Also, in order to guarantee equilibrium existence, it is necessary to work within the class of potential games, since GWSVs result in (weighted) potential games.

We also provide an alternative characterization: all games conditioned on any fixed local welfare functions possess an equilibrium if and only if the distribution rules are equivalent to generalized weighted marginal contribution (GWMC) rules on some 'ground' welfare functions. This result is due to a deeper fundamental connection between Shapley values and marginal contributions that our proofs expose: they are equivalent given a transformation connecting their ground welfare functions. (This connection leads to novel closed-form expressions for the GWSV potential function.) Since GWMCs are more tractable than GWSVs, a designer can trade off budget-balance against computational tractability in deciding which rule to implement.

Relevance: 20.00%

Abstract:

The SCF ubiquitin ligase complex of budding yeast triggers DNA replication by catalyzing ubiquitination of the S-phase CDK inhibitor SIC1. SCF is composed of several evolutionarily conserved proteins, including ySKP1, CDC53 (Cullin), and the F-box protein CDC4. We isolated hSKP1 in a two-hybrid screen with hCUL1, the human homologue of CDC53. We showed that hCUL1 associates with hSKP1 in vivo and directly interacts with hSKP1 and the human F-box protein SKP2 in vitro, forming an SCF-like particle. Moreover, hCUL1 complements the growth defect of yeast CDC53^(ts) mutants, associates with ubiquitination-promoting activity in human cell extracts, and can assemble into functional, chimeric ubiquitin ligase complexes with yeast SCF components. These data demonstrated that hCUL1 functions as part of an SCF ubiquitin ligase complex in human cells. However, purified human SCF complexes consisting of CUL1, SKP1, and SKP2 are inactive in vitro, suggesting that additional factors are required.

Subsequently, mammalian SCF ubiquitin ligases were shown to regulate various physiological processes by targeting important cellular regulators, like IκBα, β-catenin, and p27, for ubiquitin-dependent proteolysis by the 26S proteasome. Little, however, is known about the regulation of the various SCF complexes. By using sequential immunoaffinity purification and mass spectrometry, we identified proteins that interact with the human SCF components SKP2 and CUL1 in vivo. Among them we identified two additional SCF subunits: HRT1, present in all SCF complexes, and CKS1, which binds to SKP2 and is likely to be a subunit of SCF^(SKP2) complexes. Subsequent work by others demonstrated that these proteins are essential for SCF activity. We also discovered that the COP9 Signalosome (CSN), previously described in plants as a suppressor of photomorphogenesis, associates with CUL1 and other SCF subunits in vivo. This interaction is evolutionarily conserved and is also observed with other Cullins, suggesting that all Cullin-based ubiquitin ligases are regulated by CSN. CSN regulates Cullin neddylation, presumably through CSN5/JAB1, a stoichiometric Signalosome subunit and a putative deneddylating enzyme. This work sheds light on an intricate connection between signal transduction pathways and the protein degradation machinery inside the cell, and sets the stage for gaining further insights into the regulation of protein degradation.

Relevance: 20.00%

Abstract:

In the quest for a descriptive theory of decision-making, the rational actor model in economics imposes rather unrealistic expectations and abilities on human decision makers. The further we move from idealized scenarios, such as perfectly competitive markets, and ambitiously extend the reach of the theory to describe everyday decision-making situations, the less sense these assumptions make. Behavioural economics has instead proposed models based on assumptions that are more psychologically realistic, with the aim of gaining more precision and descriptive power. Increased psychological realism, however, comes at the cost of a greater number of parameters and model complexity. Now there is a plethora of models, based on different assumptions and applicable in differing contextual settings, and selecting the right model to use tends to be an ad hoc process. In this thesis, we develop optimal experimental design methods and evaluate different behavioural theories against evidence from lab and field experiments.

We look at evidence from controlled laboratory experiments. Subjects are presented with choices between monetary gambles or lotteries. Different decision-making theories evaluate the choices differently and would make distinct predictions about the subjects' choices. Theories whose predictions are inconsistent with the actual choices can be systematically eliminated. Behavioural theories can have multiple parameters, requiring complex experimental designs with a very large number of possible choice tests. This imposes computational and economic constraints on using classical experimental design methods. We develop a methodology of adaptive tests, Bayesian Rapid Optimal Adaptive Designs (BROAD), that sequentially chooses the "most informative" test at each stage and, based on the response, updates its posterior beliefs over the theories, which informs the next most informative test to run. BROAD utilizes the Equivalence Class Edge Cutting (EC2) criterion to select tests. We prove that the EC2 criterion is adaptively submodular, which allows us to prove theoretical guarantees against the Bayes-optimal testing sequence even in the presence of noisy responses. In simulated ground-truth experiments, we find that the EC2 criterion recovers the true hypotheses with significantly fewer tests than more widely used criteria such as Information Gain and Generalized Binary Search. We show, theoretically as well as experimentally, that, surprisingly, these popular criteria can perform poorly in the presence of noise or subject errors. Furthermore, we use the adaptive submodularity of EC2 to implement an accelerated greedy version of BROAD which leads to orders-of-magnitude speedup over other methods.
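A toy sketch of the EC2 selection rule (ours; noiseless responses and a three-hypothesis space for brevity, whereas BROAD is Bayesian and noise-tolerant): hypotheses are grouped into equivalence classes, edges join hypotheses in different classes with weight equal to the product of their priors, and the greedy step picks the test that cuts the largest expected edge mass.

```python
import itertools

priors = {"H1": 0.4, "H2": 0.35, "H3": 0.25}          # prior over hypotheses
classes = {"H1": "A", "H2": "A", "H3": "B"}           # equivalence classes
answers = {                                            # test -> hypothesis -> outcome
    "t1": {"H1": 0, "H2": 1, "H3": 1},
    "t2": {"H1": 0, "H2": 0, "H3": 1},
}

def edges(hyps):
    """EC2 edges join surviving hypotheses from different classes."""
    return [(h, g) for h, g in itertools.combinations(hyps, 2)
            if classes[h] != classes[g]]

def expected_cut(test, hyps):
    """Expected prior-mass of edges removed by observing this test."""
    def weight(es):
        return sum(priors[h] * priors[g] for h, g in es)
    before = weight(edges(hyps))
    total_mass = sum(priors[h] for h in hyps)
    after = 0.0
    for outcome in {answers[test][h] for h in hyps}:
        surviving = [h for h in hyps if answers[test][h] == outcome]
        p_outcome = sum(priors[h] for h in surviving) / total_mass
        after += p_outcome * weight(edges(surviving))
    return before - after

best = max(answers, key=lambda t: expected_cut(t, list(priors)))
print("most informative first test:", best)  # -> t2 (separates class A from B)
```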

We use BROAD to perform two experiments. First, we compare the main classes of theories for decision-making under risk, namely: expected value, prospect theory, constant relative risk aversion (CRRA), and moments models. Subjects are given an initial endowment and sequentially presented with choices between two lotteries, with the possibility of losses. The lotteries are selected using BROAD, and 57 subjects from Caltech and UCLA are incentivized by randomly realizing one of the chosen lotteries. Aggregate posterior probabilities over the theories show limited evidence in favour of CRRA and moments models. Classifying the subjects into types showed that most subjects are described by prospect theory, followed by expected value. Adaptive experimental design raises the possibility that subjects could engage in strategic manipulation, i.e., subjects could mask their true preferences and choose differently in order to obtain more favourable tests in later rounds, thereby increasing their payoffs. We pay close attention to this problem; strategic manipulation is ruled out since it is infeasible in practice, and also since we do not find any signatures of it in our data.

In the second experiment, we compare the main theories of time preference: exponential discounting, hyperbolic discounting, "present bias" models (quasi-hyperbolic (α, β) discounting and fixed-cost discounting), and generalized-hyperbolic discounting. 40 subjects from UCLA were given choices between two options: a smaller but more immediate payoff versus a larger but later payoff. We found very limited evidence for present-bias models and hyperbolic discounting, and most subjects were classified as generalized-hyperbolic discounting types, followed by exponential discounting.
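For reference (standard forms; parameterizations vary across papers, and the thesis's (α, β) naming may differ from the common one), the discount functions being compared are typically

$$D_{\text{exp}}(t) = \delta^t, \qquad D_{\text{hyp}}(t) = \frac{1}{1 + k t}, \qquad D_{\text{quasi-hyp}}(t) = \begin{cases} 1 & t = 0 \\ \beta\,\delta^t & t > 0, \end{cases} \qquad D_{\text{gen-hyp}}(t) = (1 + \alpha t)^{-\beta/\alpha}.$$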

In these models the passage of time is linear. We instead consider a psychological model where the perception of time is subjective. We prove that when the biological (subjective) time is positively dependent, it gives rise to hyperbolic discounting and temporal choice inconsistency.

We also test the predictions of behavioural theories in the "wild". We focus on prospect theory, which emerged as the dominant theory in our lab experiments on risky choice. Loss aversion and reference dependence predict that consumers will behave in ways distinct from what the standard rational model predicts. Specifically, loss aversion predicts that when an item is offered at a discount, the demand for it will be greater than that explained by its price elasticity. Even more importantly, when the item is no longer discounted, demand for its close substitute should increase excessively. We tested this prediction using a discrete choice model with a loss-averse utility function on data from a large eCommerce retailer. Not only did we identify loss aversion, but we also found that the effect decreased with consumers' experience. We outline the policy implications that consumer loss aversion entails, and strategies for competitive pricing.
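The loss-averse utility in such models is typically of the Kahneman-Tversky reference-dependent form (the standard form, not necessarily the exact specification estimated here):

$$v(x) = \begin{cases} (x - r)^{\alpha} & x \ge r \\ -\lambda\,(r - x)^{\beta} & x < r, \end{cases}$$

with reference point $r$ and loss-aversion coefficient $\lambda > 1$, so losses relative to the reference loom larger than equal-sized gains.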

In future work, BROAD can be widely applied for testing different behavioural models, e.g., in social preference and game theory, and in different contextual settings. Additional measurements beyond choice data, including biological measurements such as skin conductance, can be used to more rapidly eliminate hypotheses and speed up model comparison. Discrete choice models also provide a framework for testing behavioural models with field data, and encourage combined lab-field experiments.

Relevance: 20.00%

Abstract:

Understanding how transcriptional regulatory sequence maps to regulatory function remains a difficult problem in regulatory biology. Given a particular DNA sequence for a bacterial promoter region, we would like to be able to say which transcription factors bind there, how strongly they bind, and whether they interact with each other and/or RNA polymerase, with the ultimate objective of integrating knowledge of these parameters into a prediction of gene expression levels. Statistical thermodynamics provides a useful framework for doing so, enabling us to predict how gene expression levels depend on transcription factor binding energies and concentrations. We used thermodynamic models, coupled with models of the sequence-dependent binding energies of transcription factors and RNAP, to construct a genotype-to-phenotype map for the level of repression exhibited by the lac promoter, and tested it experimentally using a set of promoter variants from E. coli strains isolated from different natural environments. For this work, we sought to "reverse engineer" naturally occurring promoter sequences to understand how variations in promoter sequence affect gene expression. The natural inverse of this approach is to "forward engineer" promoter sequences to obtain targeted levels of gene expression. We used a high-precision model of the sequence-dependent RNAP-DNA binding energy, coupled with a thermodynamic model relating binding energy to gene expression, to predictively design and verify a suite of synthetic E. coli promoters whose expression varied over nearly three orders of magnitude.
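A representative formula from this framework (the standard weak-promoter thermodynamic result for simple repression; our rendering, not quoted from the thesis):

$$\text{fold-change} \;=\; \frac{\text{expression with repressor}}{\text{expression without}} \;\approx\; \Big(1 + \frac{R}{N_{NS}}\,e^{-\Delta\varepsilon_{RD}/k_B T}\Big)^{-1},$$

where $R$ is the repressor copy number, $N_{NS}$ the number of nonspecific genomic binding sites, and $\Delta\varepsilon_{RD}$ the repressor-operator binding energy, the quantity that the sequence-dependent energy models supply.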

However, although thermodynamic models enable predictions of mean levels of gene expression, it has become evident that cell-to-cell variability, or "noise", in gene expression can also play a biologically important role. In order to address this aspect of gene regulation, we developed models based on the chemical master equation framework and used them to explore the noise properties of a number of common E. coli regulatory motifs; these properties included the dependence of the noise on parameters such as transcription factor binding strength and copy number. We then performed experiments in which these parameters were systematically varied and measured the level of variability using mRNA FISH. The results showed a clear dependence of the noise on these parameters, in accord with model predictions.
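A minimal sketch (ours, not the thesis's code) of the kind of stochastic simulation used alongside master-equation models: a birth-death process for mRNA, whose steady state is Poisson (Fano factor ≈ 1), the usual baseline against which regulated noise is measured.

```python
import numpy as np

# Gillespie (SSA) simulation of constitutive mRNA production/decay:
# production at rate k, degradation at rate g per molecule.
rng = np.random.default_rng(42)
k, g = 5.0, 0.5          # production and degradation rates (illustrative)
m, t, t_end = 0, 0.0, 2000.0
samples = []
while t < t_end:
    rates = np.array([k, g * m])
    total = rates.sum()
    t += rng.exponential(1.0 / total)         # time to next reaction
    if rng.random() < rates[0] / total:       # choose which reaction fires
        m += 1
    else:
        m -= 1
    if t > 100.0:                             # discard the initial transient
        samples.append(m)                     # event-sampled; fine for a sketch

samples = np.array(samples)
print(f"mean = {samples.mean():.2f} (k/g = {k/g}), "
      f"Fano = {samples.var() / samples.mean():.2f}")
```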

Finally, one shortcoming of the preceding modeling frameworks is that their applicability is largely limited to systems that are already well characterized, such as the lac promoter. Motivated by this fact, we used a high-throughput promoter mutagenesis assay called Sort-Seq to explore the completely uncharacterized transcriptional regulatory DNA of the E. coli mechanosensitive channel of large conductance (MscL). We identified several candidate transcription factor binding sites, and work is continuing to identify the associated proteins.

Relevance: 20.00%

Abstract:

In this work we chiefly deal with two broad classes of problems in computational materials science: determining the doping mechanism in a semiconductor, and developing an extreme-condition equation of state. While solving certain aspects of these questions is well-trodden ground, both require extending the reach of existing methods to answer them fully. Here we choose to build upon the framework of density functional theory (DFT), which provides an efficient means to investigate a system from a quantum-mechanical description.

Zinc phosphide (Zn3P2) could be the basis for cheap and highly efficient solar cells. Its use in this regard is limited by the difficulty of n-type doping the material. In an effort to understand the mechanism behind this, the energetics and electronic structure of intrinsic point defects in zinc phosphide are studied using generalized Kohn-Sham theory with the Heyd, Scuseria, and Ernzerhof (HSE) hybrid functional for exchange and correlation. A novel 'perturbation extrapolation' technique is utilized to extend the use of the computationally expensive HSE functional to this large-scale defect system. According to the calculations, the formation energies of charged phosphorus interstitial defects are very low in n-type Zn3P2; these defects act as 'electron sinks', nullifying the desired doping and lowering the Fermi level back towards the p-type regime. Going forward, this insight provides clues for fabricating useful zinc phosphide based devices. In addition, the methodology developed for this work can be applied to doping studies in other systems.
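The defect energetics rest on the standard formation-energy expression of the supercell formalism (standard convention, our rendering):

$$E^f(X^q) \;=\; E_{\text{tot}}(X^q) - E_{\text{tot}}(\text{bulk}) - \sum_i n_i \mu_i + q\,(E_F + \varepsilon_{\text{VBM}}) + E_{\text{corr}},$$

where $n_i$ atoms of chemical potential $\mu_i$ are added ($n_i > 0$) or removed, $E_F$ is the Fermi level referenced to the valence-band maximum $\varepsilon_{\text{VBM}}$, and $E_{\text{corr}}$ collects finite-size corrections; the $q E_F$ term is what makes charged phosphorus interstitials cheap in n-type material.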

Accurate determination of high-pressure and high-temperature equations of state is fundamental in a variety of fields. However, it is often very difficult to cover a wide range of temperatures and pressures in a laboratory setting. Here we develop methods to determine a multi-phase equation of state for Ta through computation. The typical means of investigating thermodynamic properties is via 'classical' molecular dynamics, where the atomic motion is calculated from Newtonian mechanics with the electronic effects abstracted away into an interatomic potential function. For our purposes, a 'first principles' approach such as DFT is useful, as a classical potential is typically valid for only a portion of the phase diagram (i.e., whatever part it has been fit to). Furthermore, at extremes of temperature and pressure, quantum effects become critical to accurately capturing an equation of state and are very hard to capture in even complex model potentials. This requires extending the inherently zero-temperature DFT to predict the finite-temperature response of the system. Statistical modelling and thermodynamic integration are used to extend our results over all phases, as well as phase-coexistence regions, which are at the limits of typical DFT validity. We deliver the most comprehensive and accurate equation of state for Ta to date. This work also lends insights that can be applied to further equation-of-state work in many other materials.
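The free-energy machinery referred to here is standard thermodynamic integration over a coupling parameter (a standard identity, not a quotation from the thesis):

$$F \;=\; F_{\text{ref}} \;+\; \int_0^1 \Big\langle \frac{\partial U(\lambda)}{\partial \lambda} \Big\rangle_{\lambda}\, d\lambda,$$

which transports the known free energy $F_{\text{ref}}$ of a cheap reference system to that of the DFT-described system by averaging along a path $U(\lambda)$ interpolating between the two potentials.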

Relevance: 20.00%

Abstract:

This is a two-part thesis concerning the motion of a test particle in a bath. In part one we use an expansion of the operator $P L e^{it(1-P)L} L P$ to shape the Zwanzig equation into a generalized Fokker-Planck equation involving a diffusion tensor that depends on the test particle's momentum and on time.
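Schematically (a generic form consistent with the description above, not the thesis's exact equation), such a generalized Fokker-Planck equation reads

$$\frac{\partial f(\mathbf{p}, t)}{\partial t} \;=\; \frac{\partial}{\partial p_i}\left[ D_{ij}(\mathbf{p}, t)\left( \frac{\partial}{\partial p_j} + \frac{p_j}{M k_B T} \right) f(\mathbf{p}, t) \right],$$

with a diffusion tensor $D_{ij}$ that, unlike in the ordinary Fokker-Planck equation, depends on the test particle's momentum $\mathbf{p}$ and on time.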

In part two the resulting equation is studied in some detail for the case of test-particle motion in a weakly coupled Lorentz gas. The diffusion tensor for this system is considered: some of its properties are calculated, and it is computed explicitly for the case of a Gaussian potential of interaction.

The equation for the test-particle distribution function can be put into the form of an inhomogeneous Schrödinger equation. The term corresponding to the potential energy in the Schrödinger equation is considered: its structure is studied, and some of its simplest features are used to find the Green's function in the limiting situations of low density and long time.

Relevance: 20.00%

Abstract:

The spin-dependent cross sections, $\sigma^T_{1/2}$ and $\sigma^T_{3/2}$, and asymmetries, $A_\parallel$ and $A_\perp$, for $^3$He have been measured at Jefferson Lab's Hall A facility. The inclusive scattering process $^3\mathrm{He}(e, e')X$ was performed for initial beam energies ranging from 0.86 to 5.1 GeV, at a scattering angle of 15.5°. The data include measurements from the quasielastic peak, the resonance region, and the deep inelastic regime. An approximation of the extended Gerasimov-Drell-Hearn integral is presented for four-momentum transfers $Q^2$ of 0.2-1.0 GeV$^2$.
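The integral in question has the standard form (conventions for sign and ordering vary; our rendering):

$$I(Q^2) \;=\; \int_{\nu_{\text{thr}}}^{\infty} \big[\sigma_{1/2}(\nu, Q^2) - \sigma_{3/2}(\nu, Q^2)\big]\, \frac{d\nu}{\nu},$$

whose real-photon limit is constrained by the Gerasimov-Drell-Hearn sum rule, $I(0) = -2\pi^2 \alpha\, \kappa^2 / M^2$ for a target of anomalous magnetic moment $\kappa$ and mass $M$.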

Also presented are results on the performance of the polarized $^3$He target. Polarization of $^3$He was achieved by spin-exchange collisions with optically pumped rubidium vapor. The $^3$He polarization was monitored using the NMR technique of adiabatic fast passage (AFP). The average target polarization was approximately 35% and was determined to have a systematic uncertainty of roughly ±4% (relative).