35 results for "Number of samples"

Relevance: 100.00%

Publisher:

Abstract:

Thiobacillus ferrooxidans MAL4-1, an isolate from the Malanjkhand copper mines, India, was adapted to grow in the presence of a high concentration (30 g L(-1)) of Cu2+, resulting in a 15-fold increase in its tolerance to Cu2+. While wild-type T. ferrooxidans MAL4-1 contained multiple plasmids, cultures adapted to Cu2+ concentrations of 20 g L(-1) or more showed a drastic reduction in the copy number of the plasmids. The reduction for three of the plasmids was estimated to be over 50-fold. Examination of the plasmid profiles of strains adapted to high concentrations of the SO4(2-) anion (as Na2SO4 or ZnSO4) indicated that the reduction in plasmid copy number is not due to the SO4(2-) anion but is specific to Cu2+. The effect of mercury on the plasmids was similar to that of copper. Deadaptation of the Cu2+- or Hg2+-adapted T. ferrooxidans restored the plasmids to their original level within the first passage. The fact that plasmid copy number is, in general, drastically reduced in Cu2+-adapted T. ferrooxidans suggests that resistance to copper is chromosome mediated. This is the first report of a selective negative influence of copper ions on the copy number of plasmids in T. ferrooxidans.


Suspensions of testicular germ cells from six species of mammals were prepared, stained for DNA content with a fluorochrome (ethidium bromide) using a common technique, and subjected to DNA flow cytometry. While uniform staining of the germ cells of the mouse, hamster, rat and monkey could be obtained by treating with 0.5% pepsin for 60 min followed by staining with ethidium bromide for 30 min, optimal staining of guinea pig and rabbit cells required pepsinization for 90 min and treatment with ethidium bromide for 60 min. The procedure adopted here provided a uniform recovery of over 80% of germ cells in each of the species tested, and the cell population distributed itself according to DNA content (expressed as C values) into 5 major classes: spermatogonia (2C), cells in S-phase, primary spermatocytes (4C), round spermatids (1C), and elongating/elongated spermatids (HC). Comparison of the DNA distribution pattern of the germ cell populations between species revealed little variation in the relative quantities of cells with 2C (8-11%), S-phase (6-9%), and 4C (6-9%) amounts of DNA. Though the spermatid cell populations exhibited variations (1C: 31-46%, HC1: 7-20% and HC2: 11-25%), they represented the bulk of germ cells (70-80%). The kinetics of the overall conversion of 2C to 1C (1C:2C ratio) and of the meiotic transformation of 4C cells to 1C (1C:4C ratio) were relatively constant across the species studied. The present study clearly demonstrates that DNA flow cytometry can be adopted with ease and assurance to quantify germ cell transformation, and thus spermatogenesis, by analysing a large number of samples with consistency both within and across the species barrier. Any variation from the norms in germ cell proportions observed following treatment, e.g. hormonal stimulation or deprivation, can then be ascribed to a specific effect of the hormone/drug on single or multiple steps in germ cell transformation.


Let G be an undirected graph with a positive real weight on each edge. It is shown that the number of minimum-weight cycles of G is bounded above by a polynomial in the number of edges of G. A similar bound holds if we wish to count the number of cycles with weight at most a constant multiple of the minimum weight of a cycle of G.


We consider a system comprising a finite number of nodes, with infinite packet buffers, that use unslotted ALOHA with Code Division Multiple Access (CDMA) to share a channel for transmitting packetised data. We propose a simple model for packet transmission and retransmission at each node, and show that saturation throughput in this model yields a sufficient condition for the stability of the packet buffers; we interpret this as the capacity of the access method. We calculate and compare the capacities of CDMA-ALOHA (with and without code sharing) and TDMA-ALOHA; we also consider carrier sensing and collision detection versions of these protocols. In each case, saturation throughput can be obtained via analysis of a continuous time Markov chain. Our results show how saturation throughput degrades with code sharing. Finally, we also present some simulation results for mean packet delay. Our work is motivated by optical CDMA, in which "chips" can be optically generated, and hence the achievable chip rate can exceed the achievable TDMA bit rate, which is limited by electronics. Code sharing may be useful in the optical CDMA context as it reduces the number of optical correlators at the receivers. Our throughput results help to quantify by how much the CDMA chip rate should exceed the TDMA bit rate so that CDMA-ALOHA yields better capacity than TDMA-ALOHA.
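The flavour of the continuous time Markov chain analysis can be illustrated with a deliberately minimal sketch, which is not the paper's model: take the state to be the number of simultaneous transmissions, assume a birth-death structure (attempt rate lam, per-transmission completion rate mu), and assume a hypothetical concurrency limit `capacity` below which CDMA transmissions succeed. All names and rates here are illustrative assumptions.

```python
from math import factorial

def stationary_distribution(lam, mu, max_active):
    """Stationary distribution of a birth-death CTMC whose state is the
    number of simultaneous transmissions (Erlang-like product form via
    detailed balance: pi_n proportional to (lam/mu)^n / n!)."""
    rho = lam / mu
    weights = [rho ** n / factorial(n) for n in range(max_active + 1)]
    total = sum(weights)
    return [w / total for w in weights]

def saturation_throughput(lam, mu, max_active, capacity):
    """Expected rate of successful completions, under the illustrative
    assumption that a transmission succeeds only while the number of
    concurrent transmissions stays within the code's capacity."""
    pi = stationary_distribution(lam, mu, max_active)
    return sum(pi[n] * n * mu for n in range(min(capacity, max_active) + 1))
```

Increasing `capacity` (e.g. by raising the chip rate) can only add non-negative terms to the sum, which is the qualitative trade-off the abstract quantifies.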


We address the problem of allocating a single divisible good to a number of agents. The agents have concave valuation functions parameterized by a scalar type. The agents report only the type. The goal is to find allocatively efficient, strategy-proof, nearly budget-balanced mechanisms within the Groves class. Near budget balance is attained by returning as much of the received payments as rebates to agents. Two performance criteria are of interest: the maximum ratio of budget surplus to efficient surplus, and the expected budget surplus, within the class of linear rebate functions. The goal is to minimize them. Assuming that the valuation functions are known, we show that both problems reduce to convex optimization problems, where the convex constraint sets are characterized by a continuum of half-plane constraints parameterized by the vector of reported types. We then propose a randomized relaxation of these problems by sampling constraints. The relaxed problem is a linear programming problem (LP). We then identify the number of samples needed for "near-feasibility" of the relaxed constraint set. Under some conditions on the valuation function, we show that the value of the approximate LP is close to the optimal value. Simulation results show significant improvements of our proposed method over the Vickrey-Clarke-Groves (VCG) mechanism without rebates. In the special case of indivisible goods, the mechanisms in this paper fall back to those proposed by Moulin, by Guo and Conitzer, and by Gujar and Narahari, without any need for randomization. Extensions of the proposed mechanisms to situations in which the valuation functions are not known to the central planner are also discussed.
Note to Practitioners: Our results will be useful in all resource allocation problems that involve gathering information privately held by strategic users, where the utilities are any concave function of the allocations, and where the resource planner is interested not in maximizing revenue but in efficient sharing of the resource. Such situations arise quite often in fair sharing of internet resources, fair sharing of funds across departments within the same parent organization, auctioning of public goods, etc. We study methods to achieve near budget balance by first collecting payments according to the celebrated VCG mechanism, and then returning as much of the collected money as rebates. Our focus on linear rebate functions allows for easy implementation. The resulting convex optimization problem is solved via relaxation to a randomized linear programming problem, for which several efficient solvers exist. This relaxation is enabled by constraint sampling. Keeping practitioners in mind, we identify the number of samples that assures a desired level of "near-feasibility" with the desired confidence level. Our methodology will occasionally require a subsidy from outside the system. We demonstrate via simulation, however, that if the mechanism is repeated several times over independent instances, then past surplus can support the subsidy requirements. We also extend our results to situations where the strategic users' utility functions are not known to the allocating entity, a common situation in the context of internet users and other problems.
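The constraint-sampling idea can be seen in a deliberately tiny semi-infinite problem, a one-variable stand-in rather than the paper's mechanism-design LP: minimize x subject to x >= f(theta) for every theta in a continuum. Sampling finitely many thetas yields a relaxation whose solution is "nearly feasible" with high probability. The function f and all names below are illustrative assumptions.

```python
import math, random

def solve_sampled(num_samples, seed=0):
    """Relax 'minimize x subject to x >= f(theta) for all theta in
    [0, 2*pi]' by enforcing the constraint only at sampled thetas.
    For this one-variable problem the sampled LP has a closed form:
    the maximum of f over the sampled constraint parameters."""
    rng = random.Random(seed)
    f = lambda t: math.sin(t) + 0.5 * math.sin(3 * t)
    thetas = [rng.uniform(0.0, 2 * math.pi) for _ in range(num_samples)]
    return max(f(t) for t in thetas)
```

With more samples the sampled optimum approaches the supremum of f over the whole interval, i.e. the unsampled constraints are violated only on a set of small measure, mirroring the near-feasibility guarantee the abstract refers to.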


Interaction between the hepatitis C virus (HCV) envelope protein E2 and the host receptor CD81 is essential for HCV entry into target cells. The number of E2-CD81 complexes necessary for HCV entry has remained difficult to estimate experimentally. Using the recently developed cell culture systems that allow persistent HCV infection in vitro, the dependence of HCV entry and kinetics on CD81 expression has been measured. We reasoned that analysis of the latter experiments using a mathematical model of viral kinetics may yield estimates of the number of E2-CD81 complexes necessary for HCV entry. Here, we constructed a mathematical model of HCV viral kinetics in vitro, in which we accounted explicitly for the dependence of HCV entry on CD81 expression. Model predictions of viral kinetics are in quantitative agreement with experimental observations. Specifically, our model predicts triphasic viral kinetics in vitro, where the first phase is characterized by cell proliferation, the second by the infection of susceptible cells and the third by the growth of cells refractory to infection. By fitting model predictions to the above data, we were able to estimate the threshold number of E2-CD81 complexes necessary for HCV entry into human hepatoma-derived cells. We found that depending on the E2-CD81 binding affinity, between 1 and 13 E2-CD81 complexes are necessary for HCV entry. With this estimate, our model captured data from independent experiments that employed different HCV clones and cells with distinct CD81 expression levels, indicating that the estimate is robust. Our study thus quantifies the molecular requirements of HCV entry and suggests guidelines for intervention strategies that target the E2-CD81 interaction. Further, our model presents a framework for quantitative analyses of cell culture studies now extensively employed to investigate HCV infection.
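A generic target-cell-limited viral dynamics model with logistic cell proliferation gives a feel for how such triphasic kinetics can be simulated; this is a textbook-style sketch, not the paper's exact model, and every parameter name below is an assumption. The second helper illustrates one simple way a CD81-dependent entry threshold could enter such a model: scale the infection rate by the probability that at least m complexes form, treating complex formation as independent binding events.

```python
from math import comb

def simulate(beta, delta, p, c, r, K, T0, V0, dt=0.01, t_end=10.0):
    """Forward-Euler integration of a basic target-cell-limited model:
      dT/dt = r*T*(1 - (T+I)/K) - beta*V*T   (target cells, logistic growth)
      dI/dt = beta*V*T - delta*I             (infected cells)
      dV/dt = p*I - c*V                      (free virus)
    Returns the trajectory as a list of (t, T, I, V) tuples."""
    T, I, V = T0, 0.0, V0
    t = 0.0
    history = [(t, T, I, V)]
    while t < t_end:
        dT = r * T * (1 - (T + I) / K) - beta * V * T
        dI = beta * V * T - delta * I
        dV = p * I - c * V
        # Clamp at zero so the crude Euler step cannot go negative.
        T = max(T + dT * dt, 0.0)
        I = max(I + dI * dt, 0.0)
        V = max(V + dV * dt, 0.0)
        t += dt
        history.append((t, T, I, V))
    return history

def entry_prob(n_sites, p_bind, m_threshold):
    """Probability that at least m_threshold E2-CD81 complexes form,
    under the illustrative assumption of independent binding events."""
    return sum(comb(n_sites, k) * p_bind ** k * (1 - p_bind) ** (n_sites - k)
               for k in range(m_threshold, n_sites + 1))
```

Multiplying `beta` by `entry_prob(...)` would make entry depend on CD81 expression in the spirit of the abstract's threshold of 1 to 13 complexes, though the paper's actual functional form may differ.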


This paper presents a method for placement of Phasor Measurement Units (PMUs) that ensures the monitoring of vulnerable buses, which are identified through transient stability analysis of the overall system. Real-time monitoring of phase angles across different nodes indicates the proximity to instability; this purpose is best served if the PMUs are placed at the more vulnerable buses. The issue is to identify the key buses where the PMUs should be placed when transient stability prediction under various disturbances is taken into account. An Integer Linear Programming technique with equality and inequality constraints is used to find the optimal placement set, with key buses identified from transient stability analysis. Results on the IEEE 14-bus system are presented to illustrate the proposed approach.
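The core covering structure of PMU placement can be sketched on a toy system. The sketch below uses the standard topological-observability rule (a PMU at bus i observes i and its neighbours) and brute-force enumeration instead of Integer Linear Programming; it omits the paper's transient-stability vulnerability constraints, and the 5-bus chain is an invented example, not an IEEE test system.

```python
from itertools import combinations

def min_pmu_placement(adjacency):
    """Smallest set of buses such that every bus is observed, assuming a
    PMU at bus i observes i and all of its neighbours.  Brute force over
    subsets of increasing size; an ILP solver does this at scale."""
    buses = sorted(adjacency)
    cover = {b: {b} | set(adjacency[b]) for b in buses}
    for size in range(1, len(buses) + 1):
        for subset in combinations(buses, size):
            observed = set().union(*(cover[b] for b in subset))
            if observed == set(buses):
                return set(subset)
    return set(buses)

# Hypothetical 5-bus chain: two PMUs suffice to observe every bus.
chain = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4]}
```

The ILP formulation the paper refers to minimizes the number of placed PMUs subject to each bus being covered by at least one PMU, with extra constraints pinning PMUs to the vulnerable buses.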


Ab initio GW calculations are a standard method for computing the spectroscopic properties of many materials. The most computationally expensive part in conventional implementations of the method is the generation and summation over the large number of empty orbitals required to converge the electron self-energy. We propose a scheme to reduce the summation over empty states by the use of a modified static remainder approximation, which is simple to implement and yields accurate self-energies for both bulk and molecular systems requiring a small fraction of the typical number of empty orbitals.


The timer-based selection scheme is a popular, simple, and distributed scheme that is used to select the best node from a set of available nodes. In it, each node sets a timer as a function of a local preference number called a metric, and transmits a packet when its timer expires. The scheme ensures that the timer of the best node, which has the highest metric, expires first. However, it fails to select the best node if another node transmits a packet within Delta s of the transmission by the best node. We derive the optimal timer mapping that maximizes the average success probability for the practical scenario in which the number of nodes in the system is unknown but only its probability distribution is known. We show that it has a special discrete structure, and present a recursive characterization to determine it. We benchmark its performance with ad hoc approaches proposed in the literature, and show that it delivers significant gains. New insights about the optimality of some ad hoc approaches are also developed.
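The failure mode described above is easy to estimate by simulation. The sketch below is an illustrative Monte Carlo check of an arbitrary decreasing metric-to-timer mapping (here an inverse-linear one), not the paper's optimal discrete mapping; the metric distribution and window length are assumptions.

```python
import random

def success_probability(num_nodes, delta, timer, trials=20000, seed=1):
    """Monte Carlo estimate of the probability that the timer scheme
    selects the best node: for any strictly decreasing metric-to-timer
    mapping `timer`, the smallest timer belongs to the best metric, and
    selection succeeds iff the runner-up fires at least `delta` later."""
    rng = random.Random(seed)
    successes = 0
    for _ in range(trials):
        metrics = [rng.random() for _ in range(num_nodes)]
        times = sorted(timer(m) for m in metrics)
        if num_nodes == 1 or times[1] - times[0] >= delta:
            successes += 1
    return successes / trials

# Example mapping: invert the metric onto a window of length 1 second.
linear = lambda m: 1.0 - m
```

Shrinking `delta` or spreading the timers over a longer window raises the success probability, which is the trade-off the optimal mapping in the paper balances.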


Let I be an m-primary ideal of a Noetherian local ring (R, m) of positive dimension. The coefficient e_1(I) of the Hilbert polynomial of an I-admissible filtration is called the Chern number of I. A formula for the Chern number has been derived involving the Euler characteristic of subcomplexes of a Koszul complex. Specific formulas for the Chern number have been given in local rings of dimension at most two. These have been used to provide new and unified proofs of several results about e_1(I).
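As background for the notation (this is the standard convention in the area, stated as a reminder rather than taken from the paper), the Hilbert coefficients arise as follows:

```latex
% For an I-admissible filtration {I_n} of a d-dimensional Noetherian
% local ring (R, m), the length \lambda(R/I_{n+1}) agrees for large n
% with a polynomial in n whose coefficients e_0, ..., e_d are the
% Hilbert coefficients; e_1 is the Chern number discussed above.
\lambda(R/I_{n+1}) \;=\; \sum_{i=0}^{d} (-1)^{i}\, e_i \binom{n+d-i}{d-i}
\qquad (n \gg 0)
```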


The distributed, low-feedback, timer scheme is used in several wireless systems to select the best node from the available nodes. In it, each node sets a timer as a function of a local preference number called a metric, and transmits a packet when its timer expires. The scheme ensures that the timer of the best node, which has the highest metric, expires first. However, it fails to select the best node if another node transmits a packet within Delta s of the transmission by the best node. We derive the optimal metric-to-timer mappings for the practical scenario where the number of nodes is unknown. We consider two cases in which the probability distribution of the number of nodes is either known a priori or is unknown. In the first case, the optimal mapping maximizes the success probability averaged over the probability distribution. In the second case, a robust mapping maximizes the worst case average success probability over all possible probability distributions on the number of nodes. Results reveal that the proposed mappings deliver significant gains compared to the mappings considered in the literature.


Rainbow connection number, rc(G), of a connected graph G is the minimum number of colors needed to color its edges so that every pair of vertices is connected by at least one path in which no two edges are colored the same (note that the coloring need not be proper). In this paper we study the rainbow connection number with respect to three important graph product operations (namely the Cartesian product, the lexicographic product and the strong product) and the operation of taking the power of a graph. In this direction, we show that if G is a graph obtained by applying any of the operations mentioned above on non-trivial graphs, then rc(G) <= 2r(G) + c, where r(G) denotes the radius of G and c is a small additive constant. In general the rainbow connection number of a bridgeless graph can be as high as the square of its radius [1]. This is an attempt to identify some graph classes which have rainbow connection number very close to the obvious lower bound of the diameter (and thus the radius). The bounds reported are tight up to additive constants. The proofs are constructive and hence yield polynomial time approximation algorithms.
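The quantities the bound compares, the radius r(G) and the trivial lower bound diam(G) <= rc(G), are straightforward to compute by BFS. The sketch below builds the Cartesian product of two small paths (a made-up example for illustration) and computes both; the function names are this sketch's own.

```python
from collections import deque
from itertools import product

def cartesian_product(g, h):
    """Adjacency lists of the Cartesian product of G and H:
    (u, v) ~ (u', v') iff u = u' and v ~ v' in H, or v = v' and u ~ u' in G."""
    adj = {(u, v): [] for u, v in product(g, h)}
    for u in g:
        for v in h:
            for w in h[v]:
                adj[(u, v)].append((u, w))
            for w in g[u]:
                adj[(u, v)].append((w, v))
    return adj

def eccentricities(adj):
    """BFS from every vertex; returns {vertex: eccentricity}."""
    ecc = {}
    for s in adj:
        dist = {s: 0}
        q = deque([s])
        while q:
            x = q.popleft()
            for y in adj[x]:
                if y not in dist:
                    dist[y] = dist[x] + 1
                    q.append(y)
        ecc[s] = max(dist.values())
    return ecc

p3 = {0: [1], 1: [0, 2], 2: [1]}      # path on 3 vertices
grid = cartesian_product(p3, p3)      # the 3x3 grid graph
ecc = eccentricities(grid)
radius, diameter = min(ecc.values()), max(ecc.values())
# diameter is the obvious lower bound on rc; the paper's upper bound
# for product graphs has the form 2*radius + c.
```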