18 results for combinatorial auction
Abstract:
Interleukin-2 (IL-2) is one of the lymphokines secreted by T helper type 1 cells upon activation mediated by the T-cell receptor (TCR) and accessory molecules. The ability to express IL-2 is correlated with T-lineage commitment and is regulated during T-cell development and differentiation. Understanding the molecular mechanism by which IL-2 gene inducibility is controlled at each transition and differentiation step is therefore to understand one aspect of T-cell development. In the present study, we first attempted to elucidate the molecular basis for the developmental changes of IL-2 gene inducibility. We showed that IL-2 gene inducibility is acquired early in immature CD4^-CD8^-TCR^- thymocytes prior to TCR gene rearrangement. As in mature T cells, a complete set of transcription factors can be induced at this early stage to activate IL-2 gene expression. The progression of these cells to cortical CD4^+CD8^+TCR^(lo) cells is accompanied by the loss of IL-2 gene inducibility. We demonstrated that the DNA-binding activities of two transcription factors, AP-1 and NF-AT, are reduced in cells at this stage. Further, the loss of factor binding, especially of AP-1, is attributable to the reduced ability to activate expression of three potential components of AP-1 and NF-AT: c-Fos, FosB, and Fra-2. We next examined the interaction of transcription factors and the IL-2 promoter in vivo using the EL4 T cell line and two non-T cell lines. We observed an all-or-none phenomenon in the factor-DNA interaction: in activated T cells, the IL-2 promoter is occupied by sequence-specific transcription factors when all the transcription factors are available; in resting T cells or non-T cells, no specific protein-DNA interaction is observed when only a subset of the factors is present in the nuclei. Purposefully reducing a particular set of factor-binding activities in stimulated T cells with the pharmacological agents cyclosporin A or forskolin also abolished all interactions. The results suggest that a combinatorial and coordinated protein-DNA interaction is required for IL-2 gene activation. The thymocyte experiments clearly illustrated that multiple transcription factors are regulated during intrathymic T-cell development, and that this regulation in turn controls the inducibility of the lineage-specific IL-2 gene. The in vivo study of protein-DNA interaction stressed the combinatorial action of transcription factors in stably occupying the IL-2 promoter and initiating its transcription, and provided a molecular mechanism for changes in IL-2 gene inducibility in T cells integrating multiple environmental signals.
Abstract:
This dissertation contains three essays on mechanism design. The common goal of these essays is to assist in the solution of different resource allocation problems where asymmetric information creates obstacles to the efficient allocation of resources. In each essay, we present a mechanism that satisfactorily solves the resource allocation problem and study some of its properties. In our first essay, "Combinatorial Assignment under Dichotomous Preferences", we present a class of problems akin to time scheduling without a pre-existing time grid, and propose a mechanism that is efficient, strategy-proof, and envy-free. Our second essay, "Monitoring Costs and the Management of Common-Pool Resources", studies what can happen to an existing mechanism, the individual tradable quotas (ITQ) mechanism, also known as the cap-and-trade mechanism, when quota enforcement is imperfect and costly. Our third essay, "Vessel Buyback", coauthored with John O. Ledyard, presents an auction design that can be used to buy back excess capital in overcapitalized industries.
Abstract:
This thesis deals with two problems. The first is the determination of λ-designs, combinatorial configurations which are essentially symmetric block designs with the condition that each subset be of the same cardinality negated. We construct an infinite family of such designs from symmetric block designs and obtain some basic results about their structure. These results enable us to solve the problem for λ = 3 and λ = 4. The second problem deals with configurations related to both λ-designs and (v, k, λ)-configurations. We have (n-1) k-subsets of {1, 2, ..., n}, S_1, ..., S_(n-1), such that S_i ∩ S_j is a λ-set for i ≠ j. We obtain specifically the replication numbers of such a design in terms of n, k, and λ, with one exceptional class which we determine explicitly. In certain special cases we settle the problem entirely.
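For orientation, the standard definition that this paraphrases (due to Ryser and Woodall, recalled here from the general literature rather than from the thesis itself) can be stated as follows:

```latex
% Standard definition of a \lambda-design, recalled for context:
% a family B_1, \dots, B_n of subsets of an n-element point set such that
\[
  |B_i \cap B_j| = \lambda \quad \text{for all } i \ne j,
  \qquad \text{and not all } |B_i| \text{ are equal.}
\]
% Requiring instead that every block have the same size k recovers a symmetric
% (v, k, \lambda)-design, which is the sense in which \lambda-designs are
% "symmetric block designs with the equal-cardinality condition negated."
```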
Abstract:
Let L be a finite geometric lattice of dimension n, and let w(k) denote the number of elements in L of rank k. Two theorems about the numbers w(k) are proved: first, w(k) ≥ w(1) for k = 2, 3, ..., n-1. Second, w(k) = w(1) if and only if k = n-1 and L is modular. Several corollaries concerning the "matching" of points and dual points are derived from these theorems.
Both theorems can be regarded as a generalization of a theorem of de Bruijn and Erdös concerning λ = 1 designs. The second can also be considered as the converse to a special case of Dilworth's theorem on finite modular lattices.
These results are related to two conjectures due to G.-C. Rota. The "unimodality" conjecture states that the w(k)'s form a unimodal sequence. The "Sperner" conjecture states that a set of non-comparable elements in L has cardinality at most max_k {w(k)}. In this thesis, a counterexample to the Sperner conjecture is exhibited.
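As a sanity check on what these statements say (a standard example, not one taken from the thesis), consider the Boolean lattice B_n of all subsets of an n-set, which is a modular geometric lattice of dimension n:

```latex
% In B_n the number of elements of rank k is w(k) = \binom{n}{k}, so
\[
  w(k) \;=\; \binom{n}{k} \;\ge\; \binom{n}{1} \;=\; n \;=\; w(1),
  \qquad 1 \le k \le n-1,
\]
% with equality exactly at k = n-1, as the second theorem predicts for a
% modular lattice. Both of Rota's conjectures also hold in this example:
% \binom{n}{k} is unimodal in k, and Sperner's classical theorem bounds every
% antichain of B_n by \max_k \binom{n}{k} = \binom{n}{\lfloor n/2 \rfloor}.
% The counterexample in the thesis shows that the Sperner property can fail
% for geometric lattices in general.
```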
Abstract:
There is a growing amount of experimental evidence suggesting that people often deviate from the predictions of game theory. Some scholars attempt to explain the observations by introducing errors into behavioral models. However, most of these modifications are situation dependent and do not generalize. A new theory, called the rational novice model, is introduced as an attempt to provide a general theory that takes account of erroneous behavior. The rational novice model is based on two central principles. The first is that people systematically make inaccurate guesses when they are evaluating their options in a game-like situation. The second is that people treat their decisions as they would a portfolio problem. As a result, actions that are non-optimal in a game-theoretic sense may be included in the rational novice strategy profile with positive weights.
The rational novice model can be divided into two parts: the behavioral model and the equilibrium concept. In a theoretical chapter, the mathematics of the behavioral model and the equilibrium concept are introduced. The existence of the equilibrium is established. In addition, the Nash equilibrium is shown to be a special case of the rational novice equilibrium. In another chapter, the rational novice model is applied to a voluntary contribution game. Numerical methods were used to obtain the solution. The model is estimated with data obtained from the Palfrey and Prisbrey experimental study of the voluntary contribution game. It is found that the rational novice model explains the data better than the Nash model. Although a formal statistical test was not used, pseudo R^2 analysis indicates that the rational novice model is better than a Probit model similar to the one used in the Palfrey and Prisbrey study.
The rational novice model is also applied to a first price sealed bid auction. Again, computing techniques were used to obtain a numerical solution. The data obtained from the Chen and Plott study were used to estimate the model. The rational novice model outperforms the CRRAM, the primary Nash model studied in the Chen and Plott study. However, the rational novice model is not the best amongst all models. A sophisticated rule-of-thumb, called the SOPAM, offers the best explanation of the data.
Abstract:
Hematopoiesis is a well-established system used to study developmental choices amongst cells with multiple lineage potentials, as well as the transcription factor network interactions that drive these developmental paths. Multipotent progenitors travel from the bone marrow to the thymus where T-cell development is initiated, and these early T-cell precursors retain lineage plasticity even after initiating a T-cell program. The development of these early cells is driven by Notch signaling and the combinatorial expression of many transcription factors, several of which are also involved in the development of other cell lineages. The ETS family transcription factor PU.1 is involved in the development of progenitor, myeloid, and lymphoid cells, and can divert progenitor T-cells from the T-lineage to a myeloid lineage. This diversion of early T-cells by PU.1 can be blocked by Notch signaling. The PU.1 and Notch interaction creates a switch wherein PU.1 in the presence of Notch promotes T-cell identity and PU.1 in the absence of Notch signaling promotes a myeloid identity. Here we characterized an early T-cell line, Scid.adh.2c2, as a good model system for studying the myeloid vs. lymphoid developmental choice dependent on PU.1 and Notch signaling. We then used the Scid.adh.2c2 system to identify mechanisms mediating PU.1 and Notch signaling interactions during early T-cell development. We show that the mechanism by which Notch signaling protects pro-T cells is neither degradation nor modification of the PU.1 protein. Instead, we give evidence that Notch signaling blocks the PU.1-driven inhibition of a key set of T-regulatory genes including Myb, Tcf7, and Gata3. We show that the protection of Gata3 from PU.1-mediated inhibition, by Notch signaling and Myb, is important for retaining a T-lineage identity. We also discuss a PU.1-driven mechanism involving E-protein inhibition that leads to the inhibition of Notch target genes. This mechanism may serve as a lockdown in pro-T cells that have made the decision to divert to the myeloid pathway.
Abstract:
The primary focus of this thesis is on the interplay of descriptive set theory and the ergodic theory of group actions. This incorporates the study of turbulence and Borel reducibility on the one hand, and the theory of orbit equivalence and weak equivalence on the other. Chapter 2 is joint work with Clinton Conley and Alexander Kechris; we study measurable graph combinatorial invariants of group actions and employ the ultraproduct construction as a way of constructing various measure preserving actions with desirable properties. Chapter 3 is joint work with Lewis Bowen; we study the property MD of residually finite groups, and we prove a conjecture of Kechris by showing that under general hypotheses property MD is inherited by a group from one of its co-amenable subgroups. Chapter 4 is a study of weak equivalence. One of the main results answers a question of Abért and Elek by showing that within any free weak equivalence class the isomorphism relation does not admit classification by countable structures. The proof relies on affirming a conjecture of Ioana by showing that the product of a free action with a Bernoulli shift is weakly equivalent to the original action. Chapter 5 studies the relationship between mixing and freeness properties of measure preserving actions. Chapter 6 studies how approximation properties of ergodic actions and unitary representations are reflected group theoretically and also operator algebraically via a group's reduced C*-algebra. Chapter 7 is an appendix which includes various results on mixing via filters and on Gaussian actions.
Abstract:
β-lactamases are a group of enzymes that confer resistance to penam and cephem antibiotics by hydrolysis of the β-lactam ring, thereby inactivating the antibiotic. Crystallographic and computer modeling studies of RTEM-1 β-lactamase have indicated that Asp 132, a strictly conserved residue among the class A β-lactamases, appears to be involved in substrate binding, catalysis, or both. To study the contribution of residue 132 to β-lactamase function, site saturation mutagenesis was used to generate mutants coding for all 20 amino acids at position 132. Phenotypic screening of all mutants indicated that position 132 is very sensitive to amino acid changes, with only N132C, N132D, N132E, and N132Q showing any appreciable activity. Kinetic analysis of three of these mutants showed increases in K_M, along with substantial decreases in k_(cat). Efforts to trap a stable acyl-enzyme intermediate were unsuccessful. These results indicate that residue 132 is involved in substrate binding, as well as catalysis, and support the involvement of this residue in acylation as suggested by Strynadka et al.
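For readers less familiar with the kinetic parameters quoted throughout this abstract, the standard Michaelis-Menten relation (textbook enzymology, not material from the thesis) fixes their interpretation:

```latex
% Michaelis-Menten rate law:
\[
  v \;=\; \frac{k_{\mathrm{cat}}\,[E]_0\,[S]}{K_M + [S]}
\]
% A mutant with a higher K_M needs more substrate to reach half-maximal rate
% (weaker apparent binding), while a lower k_cat turns over bound substrate
% more slowly; k_cat / K_M is the usual measure of overall catalytic
% efficiency. The N132 mutants described above show both effects.
```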
Crystallographic and computer modeling studies of RTEM-1 β-lactamase have indicated that Lys 73 and Glu 166, two strictly conserved residues among the class A β-lactamases, appear to be involved in substrate binding, catalysis, or both. To study the contribution of these residues to β-lactamase function, site saturation mutagenesis was used to generate mutants coding for all 20 amino acids at positions 73 and 166. All 400 possible combinations of mutants were then created by combinatorial mutagenesis. The colonies harboring the mutants were screened for growth in the presence of ampicillin. The DNA of the competent colonies was sequenced, and kinetic parameters were investigated. It was found that lysine is essential at position 73, and that position 166 tolerated only fairly conservative changes (aspartic acid, histidine, and tyrosine). These functional mutants exhibited decreased k_(cat) values, but K_M remained close to wild-type levels. The results of the combinatorial mutagenesis experiments indicate that lysine is absolutely required for activity at position 73; no mutation at residue 166 can compensate for loss of the long side-chain amine. The active mutants found (K73K/E166D, K73K/E166H, and K73K/E166Y) were studied by kinetic analysis. These results reaffirmed the function of residue 166 as important in catalysis, specifically deacylation.
The identity of the residue responsible for activating the active-site serine (Ser 70) in RTEM-1 β-lactamase has been disputed for some time. Recently, analysis of a crystal structure of RTEM-1 β-lactamase with a covalently bound intermediate was published, and it was suggested that Lys 73, a strictly conserved residue among the class A β-lactamases, acts as a general base, activating Ser 70. For this to be possible, the pK_a of Lys 73 would have to be depressed significantly. In an attempt to assay the pK_a of Lys 73, the mutation K73C was made. This mutant protein can be reacted with 2-bromoethylamine, and activity is restored to near wild-type levels. ^(15)N-2-bromoethylamine hydrobromide and ^(13)C-2-bromoethylamine hydrobromide were synthesized. Reacting these compounds with the K73C mutant gives stable isotopic enrichment at residue 73 in the form of aminoethylcysteine, a lysine homologue. The pK_a of an amine can be determined by NMR titration, following the change in chemical shift of either the ^(15)N-amine nuclei or the adjacent ^(13)C nuclei as pH is changed. Unfortunately, low protein solubility, along with probable label scrambling in the ^(13)C experiment, did not permit direct observation of either the ^(15)N or ^(13)C signals. Indirect detection experiments were used to observe the protons bonded directly to the ^(13)C atoms. Two NMR signals were seen, and their chemical shift change with pH variation was noted. The peak determined to correspond to the aminoethylcysteine residue shifted from 3.2 ppm down to 2.8 ppm over a pH range of 6.6 to 12.5. The pK_a of the amine at position 73 was determined to be ~10. This indicates that residue 73 does not function as a general base in the acylation step of the reaction. However, the experimental measurement takes place in the absence of substrate. Since the enzyme undergoes conformational changes upon substrate binding, the measured pK_a of the free enzyme may not correspond to the pK_a of the enzyme-substrate complex.
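The pK_a extraction described here amounts to fitting a one-site (Henderson-Hasselbalch-type) titration curve to chemical shift versus pH. Below is a minimal sketch of such a fit in Python; the data points are placeholders chosen only to be consistent with the shift range quoted above (3.2 to 2.8 ppm over pH 6.6 to 12.5), not the thesis's actual measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def titration_shift(pH, delta_protonated, delta_deprotonated, pKa):
    """One-site fast-exchange titration curve for an observed chemical shift.

    The observed shift is the population-weighted average of the shifts of the
    protonated (R-NH3+) and deprotonated (R-NH2) forms.
    """
    frac_deprotonated = 1.0 / (1.0 + 10.0 ** (pKa - pH))
    return delta_protonated + (delta_deprotonated - delta_protonated) * frac_deprotonated

# Placeholder titration data (pH, ppm) -- illustrative only, not thesis data.
pH = np.array([6.6, 7.5, 8.5, 9.0, 9.5, 10.0, 10.5, 11.0, 12.0, 12.5])
delta = np.array([3.20, 3.20, 3.18, 3.14, 3.08, 3.00, 2.92, 2.86, 2.81, 2.80])

popt, _ = curve_fit(titration_shift, pH, delta, p0=[3.2, 2.8, 10.0])
print("fitted pKa ~ %.1f" % popt[2])  # comes out near 10 for these placeholder data
```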
Abstract:
Storage systems are widely used and have played a crucial role in both consumer and industrial products, for example, personal computers, data centers, and embedded systems. However, such systems suffer from issues of cost, restricted lifetime, and reliability with the emergence of new systems and devices, such as distributed storage and flash memory, respectively. Information theory, on the other hand, provides fundamental bounds and solutions to fully utilize resources such as data density, information I/O, and network bandwidth. This thesis bridges these two topics and proposes to solve challenges in data storage using a variety of coding techniques, so that storage becomes faster, more affordable, and more reliable.
We consider the system level and study the integration of RAID schemes and distributed storage. Erasure-correcting codes are the basis of the ubiquitous RAID schemes for storage systems, where disks correspond to symbols in the code and are located in a (distributed) network. Specifically, RAID schemes are based on MDS (maximum distance separable) array codes that enable optimal storage and efficient encoding and decoding algorithms. With r redundancy symbols, an MDS code can sustain r erasures. For example, consider an MDS code that can correct two erasures. It is clear that when two symbols are erased, one needs to access and transmit all the remaining information to rebuild the erasures. However, an interesting and practical question is: what is the smallest fraction of information that one needs to access and transmit in order to correct a single erasure? In Part I we show that the lower bound of 1/2 for this two-parity case is achievable, and that the result can be generalized to codes with an arbitrary number of parities and optimal rebuilding.
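To make the access question concrete, here is a minimal sketch of a toy two-parity binary array code with two data nodes of two bits each (an illustrative construction in the spirit of zigzag/butterfly-style codes, not the codes designed in the thesis). One can check that any two of the four nodes suffice to recover the data, yet a single failed data node is rebuilt by reading only half of the surviving symbols:

```python
from itertools import product

def encode(x0, x1, y0, y1):
    """Encode two 2-bit data nodes X, Y into four stored nodes X, Y, P, B."""
    X = (x0, x1)
    Y = (y0, y1)
    P = (x0 ^ y0, x1 ^ y1)        # row parities
    B = (x1 ^ y0, x0 ^ x1 ^ y1)   # "zigzag"-style parities
    return X, Y, P, B

def rebuild_X(Y, P, B):
    """Rebuild a failed node X by reading only 3 of the 6 surviving symbols.

    A generic MDS decode would instead read two full surviving nodes
    (4 of the 6 symbols); here the access ratio drops to 1/2.
    """
    y0 = Y[0]
    x0 = P[0] ^ y0
    x1 = B[0] ^ y0
    return (x0, x1)

# Exhaustively verify correct rebuilding for all 16 possible data patterns.
for bits in product((0, 1), repeat=4):
    X, Y, P, B = encode(*bits)
    assert rebuild_X(Y, P, B) == X
print("node X rebuilt from half of the surviving symbols in all cases")
```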
We consider the device level and study coding and modulation techniques for emerging non-volatile memories such as flash memory. In particular, rank modulation is a novel data representation scheme proposed by Jiang et al. for multi-level flash memory cells, in which a set of n cells stores information in the permutation induced by the different charge levels of the individual cells. It eliminates the need for discrete cell levels, as well as overshoot errors, when programming cells. In order to decrease the decoding complexity, we propose two variations of this scheme in Part II: bounded rank modulation, where only small sliding windows of cells are sorted to generate permutations, and partial rank modulation, where only part of the n cells are used to represent data. We study limits on the capacity of bounded rank modulation and propose encoding and decoding algorithms. We show that overlaps between windows will increase capacity. We present Gray codes spanning all possible partial-rank states and using only "push-to-the-top" operations. These Gray codes turn out to solve an open combinatorial problem on universal cycles, where a universal cycle is a sequence of integers generating all possible partial permutations.
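The following short sketch (a simplified illustration, not code from the thesis) shows the basic rank-modulation idea: the stored symbol is the permutation induced by relative charge levels, and writing uses only "push-to-the-top" operations that raise one cell above all others, so no absolute threshold levels are needed:

```python
def read_permutation(charges):
    """Return the permutation induced by the cells' relative charge levels.

    The permutation lists cell indices from highest charge to lowest; only the
    ranking matters, not the absolute analog values.
    """
    return tuple(sorted(range(len(charges)), key=lambda i: charges[i], reverse=True))

def push_to_top(charges, cell):
    """'Push-to-the-top': inject charge into one cell until it exceeds all others.

    This is the only programming primitive needed, which avoids the overshoot
    errors inherent in targeting fixed discrete levels.
    """
    charges = list(charges)
    charges[cell] = max(charges) + 1.0
    return charges

# Example with n = 3 cells: 3! = 6 permutations, hence 6 distinct symbols.
charges = [0.7, 0.2, 1.5]                 # arbitrary analog charge levels
print(read_permutation(charges))          # (2, 0, 1): cell 2 highest, then 0, then 1
charges = push_to_top(charges, 1)         # reprogram by pushing cell 1 to the top
print(read_permutation(charges))          # (1, 2, 0)
```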
Abstract:
This thesis focuses mainly on linear algebraic aspects of combinatorics. Let N_t(H) be the incidence matrix whose rows are indexed by the edges of a complete hypergraph and whose columns by all subhypergraphs isomorphic to H. Richard M. Wilson and the author find the general formula for the Smith normal form or diagonal form of N_t(H) for all simple graphs H and for a very general class of t-uniform hypergraphs H.
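As a small concrete instance of the kind of matrix involved (an illustrative computation only, not the thesis's general formula), one can take t = 2 and H = K_3 inside K_5, so that rows are the 10 edges and columns the 10 triangles, and ask a computer algebra system for a diagonal form; this sketch assumes SymPy's smith_normal_form is available:

```python
from itertools import combinations
from sympy import Matrix, ZZ
from sympy.matrices.normalforms import smith_normal_form

n = 5
edges = list(combinations(range(n), 2))       # the 10 edges of K_5 (here t = 2)
triangles = list(combinations(range(n), 3))   # the 10 subgraphs isomorphic to H = K_3

# N_t(H): rows = edges, columns = copies of H; entry 1 if the edge lies in the copy.
N = Matrix(len(edges), len(triangles),
           lambda i, j: int(set(edges[i]) <= set(triangles[j])))

# Diagonal (Smith normal) form over the integers; the diagonal entries are the
# invariant factors whose general formula is the subject of the thesis.
print(smith_normal_form(N, domain=ZZ))
```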
As a continuation, the author determines the formula for diagonal forms of integer matrices obtained from other combinatorial structures, including incidence matrices for subgraphs of a complete bipartite graph and inclusion matrices for multisets.
One major application of diagonal forms is in zero-sum Ramsey theory. For instance, Caro's results on zero-sum Ramsey numbers for graphs and Caro and Yuster's results on zero-sum bipartite Ramsey numbers can be reproduced. These results are further generalized to t-uniform hypergraphs. Other applications include signed bipartite graph designs.
Research results on some other problems are also included in this thesis, such as a Ramsey-type problem on equipartitions, Hartman's conjecture on large sets of designs and a matroid theory problem proposed by Welsh.
Abstract:
The work presented in this thesis revolves around erasure correction coding, as applied to distributed data storage and real-time streaming communications.
First, we examine the problem of allocating a given storage budget over a set of nodes for maximum reliability. The objective is to find an allocation of the budget that maximizes the probability of successful recovery by a data collector accessing a random subset of the nodes. This optimization problem is challenging in general because of its combinatorial nature, despite its simple formulation. We study several variations of the problem, assuming different allocation models and access models, and determine the optimal allocation and the optimal symmetric allocation (in which all nonempty nodes store the same amount of data) for a variety of cases. Although the optimal allocation can have nonintuitive structure and can be difficult to find in general, our results suggest that, as a simple heuristic, reliable storage can be achieved by spreading the budget maximally over all nodes when the budget is large, and spreading it minimally over a few nodes when it is small. Coding would therefore be beneficial in the former case, while uncoded replication would suffice in the latter case.
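As a rough illustration of the symmetric case (a simplified sketch under assumed example parameters, not the analysis in the thesis), the following computes the probability of successful recovery when a budget T is spread evenly over m of n nodes, coding is used across nodes, and the data collector reads a uniformly random r-subset of the nodes; recovery succeeds when the total amount read reaches the size of the original data (normalized to 1):

```python
from math import comb, ceil

def recovery_prob_symmetric(n, r, T, m):
    """P(successful recovery) for a symmetric allocation.

    A budget T (in units of the original data size) is split evenly over m of
    the n nodes, so each nonempty node stores T/m. A data collector reads a
    uniformly random r-subset of the n nodes and succeeds iff the total amount
    read is at least 1, i.e. iff it hits at least ceil(m/T) nonempty nodes
    (coding across nodes is assumed, so any sufficiently large total suffices).
    """
    need = ceil(m / T)                      # nonempty nodes that must be accessed
    total = comb(n, r)
    hits = sum(comb(m, i) * comb(n - m, r - i)
               for i in range(need, min(r, m) + 1))
    return hits / total

# Example with assumed numbers: n = 20 nodes, collector reads r = 5, budget T = 3.
for m in (3, 6, 10, 15, 20):
    print(m, round(recovery_prob_symmetric(n=20, r=5, T=3, m=m), 3))
```

For these example numbers the budget is small relative to the access model, and minimal spreading (m = 3, each nonempty node holding a full copy, i.e., uncoded replication) gives the highest recovery probability, in line with the heuristic stated above.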
Second, we study how distributed storage allocations affect the recovery delay in a mobile setting. Specifically, two recovery delay optimization problems are considered for a network of mobile storage nodes: the maximization of the probability of successful recovery by a given deadline, and the minimization of the expected recovery delay. We show that the first problem is closely related to the earlier allocation problem, and solve the second problem completely for the case of symmetric allocations. It turns out that the optimal allocations for the two problems can be quite different. In a simulation study, we evaluated the performance of a simple data dissemination and storage protocol for mobile delay-tolerant networks, and observed that the choice of allocation can have a significant impact on the recovery delay under a variety of scenarios.
Third, we consider a real-time streaming system where messages created at regular time intervals at a source are encoded for transmission to a receiver over a packet erasure link; the receiver must subsequently decode each message within a given delay from its creation time. For erasure models containing a limited number of erasures per coding window, per sliding window, and containing erasure bursts whose maximum length is sufficiently short or long, we show that a time-invariant intrasession code asymptotically achieves the maximum message size among all codes that allow decoding under all admissible erasure patterns. For the bursty erasure model, we also show that diagonally interleaved codes derived from specific systematic block codes are asymptotically optimal over all codes in certain cases. We also study an i.i.d. erasure model in which each transmitted packet is erased independently with the same probability; the objective is to maximize the decoding probability for a given message size. We derive an upper bound on the decoding probability for any time-invariant code, and show that the gap between this bound and the performance of a family of time-invariant intrasession codes is small when the message size and packet erasure probability are small. In a simulation study, these codes performed well against a family of random time-invariant convolutional codes under a number of scenarios.
Finally, we consider the joint problems of routing and caching for named data networking. We propose a backpressure-based policy that employs virtual interest packets to make routing and caching decisions. In a packet-level simulation, the proposed policy outperformed a basic protocol that combines shortest-path routing with least-recently-used (LRU) cache replacement.
Abstract:
This thesis presents the development of chip-based technology for informative in vitro cancer diagnostics. In the first part of this thesis, I will present my contribution to the development of a technology called "Nucleic Acid Cell Sorting (NACS)", based on microarrays composed of nucleic acid encoded peptide-major histocompatibility complexes (p/MHC), and the experimental and theoretical methods to detect and analyze secreted proteins from single or few cells.
Secondly, a novel portable platform for imaging cellular metabolism with radio probes is presented. A microfluidic chip, the so-called "Radiopharmaceutical Imaging Chip" (RIMChip), combined with a beta-particle imaging camera, is developed to visualize the uptake of radio probes in a small number of cells. Due to its sophisticated design, RIMChip allows robust and user-friendly execution of sensitive and quantitative radio assays. The performance of this platform is validated with adherent and suspension cancer cell lines. This platform is then applied to study the metabolic response of cancer cells to drug treatment. In both mouse lymphoma and human glioblastoma cell lines, the metabolic responses to drug exposure are observed within a short time (~1 hour) and are correlated with cell-cycle arrest or with changes in receptor tyrosine kinase signaling.
The last parts of this thesis present summaries of ongoing projects: development of a new agent as an in vivo imaging probe for c-MET, and quantitative monitoring of glycolytic metabolism of primary glioblastoma cells. To develop a new agent for c-MET imaging, the one-bead-one-compound combinatorial library method is used, coupled with iterative screening. The performance of the agent is quantitatively validated with cell-based fluorescent assays. To monitor the metabolism of primary glioblastoma cells by RIMChip, cells were sorted according to their expression levels of an oncoprotein, or were treated with different kinds of drugs, in order to study the metabolic heterogeneity of cancer cells and the metabolic response of glioblastoma cells to drug treatments, respectively.
Abstract:
Nucleic acids are a useful substrate for engineering at the molecular level. Designing the detailed energetics and kinetics of interactions between nucleic acid strands remains a challenge. Building on previous algorithms to characterize the ensemble of dilute solutions of nucleic acids, we present a design algorithm that allows optimization of structural features and binding energetics of a test tube of interacting nucleic acid strands. We extend this formulation to handle multiple thermodynamic states and combinatorial constraints to allow optimization of pathways of interacting nucleic acids. In both design strategies, low-cost estimates to thermodynamic properties are calculated using hierarchical ensemble decomposition and test tube ensemble focusing. These algorithms are tested on randomized test sets and on example pathways drawn from the molecular programming literature. To analyze the kinetic properties of designed sequences, we describe algorithms to identify dominant species and kinetic rates using coarse-graining at the scale of a small box containing several strands or a large box containing a dilute solution of strands.
Abstract:
Government procurement of a new good or service is a process that usually includes basic research, development, and production. Empirical evidence indicates that investments in research and development (R and D) before production are significant in many defense procurements. Thus, optimal procurement policy should not only select the most efficient producer, but also induce the contractors to design the best product and to develop the best technology. It is difficult to apply the current economic theory of optimal procurement and contracting, which has emphasized production but ignored R and D, to many cases of procurement.
In this thesis, I provide basic models of both R and D and production in the procurement process, where a number of firms invest in private R and D and compete for a government contract. R and D is modeled as a stochastic cost-reduction process. The government is considered both as a profit maximizer and as a procurement-cost minimizer. In comparison to the literature, the following results derived from my models are significant. First, R and D matters in procurement contracting. When offering the optimal contract, the government will be better off if it correctly takes into account costly private R and D investment. Second, competition matters. The optimal contract and the total equilibrium R and D expenditures vary with the number of firms. The government usually does not prefer infinite competition among firms. Instead, it prefers free entry of firms. Third, under an R and D technology with constant marginal returns to scale, it is socially optimal to have only one firm conduct all of the R and D and production. Fourth, in an independent private values environment with risk-neutral firms, an informed government should select one of four standard auction procedures with an appropriate announced reserve price, acting as if it does not have any private information.
Abstract:
There is a growing interest in taking advantage of possible patterns and structures in data so as to extract the desired information and overcome the curse of dimensionality. In a wide range of applications, including computer vision, machine learning, medical imaging, and social networks, the signal that gives rise to the observations can be modeled to be approximately sparse and exploiting this fact can be very beneficial. This has led to an immense interest in the problem of efficiently reconstructing a sparse signal from limited linear observations. More recently, low-rank approximation techniques have become prominent tools to approach problems arising in machine learning, system identification and quantum tomography.
In sparse and low-rank estimation problems, the challenge is the inherent intractability of the objective function, and one needs efficient methods to capture the low-dimensionality of these models. Convex optimization is often a promising tool to attack such problems. An intractable problem with a combinatorial objective can often be "relaxed" to obtain a tractable but almost as powerful convex optimization problem. This dissertation studies convex optimization techniques that can take advantage of low-dimensional representations of the underlying high-dimensional data. We provide provable guarantees that ensure that the proposed algorithms will succeed under reasonable conditions, and answer questions of the following flavor:
- For a given number of measurements, can we reliably estimate the true signal?
- If so, how good is the reconstruction as a function of the model parameters?
More specifically, i) Focusing on linear inverse problems, we generalize the classical error bounds known for the least-squares technique to the lasso formulation, which incorporates the signal model. ii) We show that intuitive convex approaches do not perform as well as expected when it comes to signals that have multiple low-dimensional structures simultaneously. iii) Finally, we propose convex relaxations for the graph clustering problem and give sharp performance guarantees for a family of graphs arising from the so-called stochastic block model. We pay particular attention to the following aspects. For i) and ii), we aim to provide a general geometric framework, in which the results on sparse and low-rank estimation can be obtained as special cases. For i) and iii), we investigate the precise performance characterization, which yields the right constants in our bounds and the true dependence between the problem parameters.
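As a toy illustration of the sparse linear inverse problem underlying item i) (a generic sketch, not the estimators or bounds developed in the thesis), the following recovers a sparse signal from a few random linear measurements by solving the lasso with a simple iterative soft-thresholding (proximal gradient) loop:

```python
import numpy as np

rng = np.random.default_rng(0)

# Ground truth: an s-sparse signal in R^p, observed through n < p noisy
# random linear measurements (assumed example dimensions).
p, n, s = 200, 80, 5
x_true = np.zeros(p)
x_true[rng.choice(p, size=s, replace=False)] = rng.standard_normal(s)
A = rng.standard_normal((n, p)) / np.sqrt(n)
y = A @ x_true + 0.01 * rng.standard_normal(n)

# Lasso: minimize 0.5*||A x - y||^2 + lam*||x||_1, solved by ISTA.
lam = 0.01
step = 1.0 / np.linalg.norm(A, 2) ** 2      # 1 / Lipschitz constant of the gradient
x = np.zeros(p)
for _ in range(2000):
    grad = A.T @ (A @ x - y)
    z = x - step * grad
    x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)   # soft threshold

print("nonzeros in estimate:", int(np.sum(np.abs(x) > 1e-3)))
print("relative error: %.3f" % (np.linalg.norm(x - x_true) / np.linalg.norm(x_true)))
```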