972 results for One-inclusion mistake bounds
Abstract:
Matrix metalloproteinase expression is used as a biomarker for various cancers and associated malignancies. Since these proteinases can cleave many intracellular proteins, overexpression tends to be toxic, which makes them challenging to purify. To overcome these limitations, we designed a protocol in which the full-length pro-MMP2 enzyme was overexpressed in E. coli as inclusion bodies and purified by 6xHis affinity chromatography under denaturing conditions. In a single step, the enzyme was purified and refolded directly on the affinity matrix under redox conditions to obtain a bioactive protein. The pro-MMP2 protein was characterized by mass spectrometry, CD spectroscopy, zymography, and activity analysis using a simple, in-house developed 'form invariant' assay, which reports total MMP2 activity independent of the enzyme's various forms. The methodology gave higher yields of bioactive protein than other strategies reported to date, and we anticipate that this protocol can be used to overexpress and purify other toxic proteins from E. coli and subsequently refold them into active form via a one-step renaturation.
Abstract:
Experimental and theoretical charge density analyses of 2,2-dibromo-2,3-dihydroinden-1-one have been carried out to quantify the topological features of a short C-Br...O halogen bond with nearly linear geometry (2.922 angstrom, angle C-Br...O = 172.7 degrees) and to assess the strength of the interactions using the topological features of the electron density. The electrostatic potential map indicates the presence of the sigma-hole on bromine, while the interaction energy is comparable to that of a moderate O-H...O hydrogen bond. In addition, the energetic contribution of the C-H...Br interaction is demonstrated to be on par with that of the C-Br...O halogen bond in stabilizing the crystal structure.
Abstract:
The problem of bipartite ranking, where instances are labeled positive or negative and the goal is to learn a scoring function that minimizes the probability of mis-ranking a pair of positive and negative instances (or equivalently, that maximizes the area under the ROC curve), has been widely studied in recent years. A dominant theoretical and algorithmic framework for the problem has been to reduce bipartite ranking to pairwise classification; in particular, it is well known that the bipartite ranking regret can be formulated as a pairwise classification regret, which in turn can be upper bounded using usual regret bounds for classification problems. Recently, Kotlowski et al. (2011) showed regret bounds for bipartite ranking in terms of the regret associated with balanced versions of the standard (non-pairwise) logistic and exponential losses. In this paper, we show that such (non-pairwise) surrogate regret bounds for bipartite ranking can be obtained in terms of a broad class of proper (composite) losses that we term strongly proper. Our proof technique is much simpler than that of Kotlowski et al. (2011), and relies on properties of proper (composite) losses as elucidated recently by Reid and Williamson (2010, 2011) and others. Our result yields explicit surrogate bounds (with no hidden balancing terms) in terms of a variety of strongly proper losses, including, for example, the logistic, exponential, squared and squared hinge losses as special cases. An important consequence is that standard algorithms minimizing a (non-pairwise) strongly proper loss, such as logistic regression and boosting algorithms (assuming a universal function class and appropriate regularization), are in fact consistent for bipartite ranking; moreover, our results allow us to quantify the bipartite ranking regret in terms of the corresponding surrogate regret. We also obtain tighter surrogate bounds under certain low-noise conditions via a recent result of Clemencon and Robbiano (2011).
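The quantity these surrogate bounds control is the bipartite mis-ranking probability, i.e. one minus the area under the ROC curve. A minimal sketch of that quantity (an illustration of the definition, not code from the paper; the function name is ours):

```python
import itertools

def pairwise_misrank_rate(scores, labels):
    """Fraction of (positive, negative) pairs that are mis-ranked,
    counting ties as half an error.  This equals 1 - AUC, the quantity
    the bipartite ranking regret measures."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    bad = sum(1.0 if p < n else 0.5 if p == n else 0.0
              for p, n in itertools.product(pos, neg))
    return bad / (len(pos) * len(neg))
```

A scoring function learned by minimizing a strongly proper loss (e.g. logistic regression) would be plugged in as `scores` here; the paper's result is that driving the surrogate regret down also drives this rate toward its optimum.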
Abstract:
A number of functionalized beta-amino and gamma-amino sulfides and selenides have been synthesized via a one-pot process involving ring opening of cyclic sulfamidates with 'in situ' generated thiolate and selenolate species, derived from diaryl disulfides and diphenyl diselenide using rongalite. A mild and efficient method has also been developed for the synthesis of cysteines from serine.
Abstract:
The use of Projection Reconstruction (PR) to obtain two-dimensional (2D) spectra from one-dimensional (1D) data in the solid state is illustrated. The method exploits multiple 1D spectra obtained using magic angle spinning and off-magic-angle spinning. Spectra recorded under the influence of scaled heteronuclear scalar and dipolar couplings in the presence of homonuclear dipolar decoupling sequences have been used to reconstruct J/D-resolved 2D NMR spectra. Just two 1D spectra are found to be sufficient to reconstruct a J-resolved 2D spectrum, while a Separated Local Field (SLF) 2D NMR spectrum could be obtained from three 1D spectra. The experimental techniques for recording the 1D spectra and the reconstruction procedure are discussed, and the reconstructed results are compared with 2D experiments recorded using traditional methods. The technique has been applied to a solid polycrystalline sample and to a uniaxially oriented liquid crystal. Implementation of PR-NMR in the solid state provides high-resolution spectra and leads to a significant reduction in experimental time. The experiments are relatively simple and avoid several technical complications involved in performing 2D experiments.
Abstract:
We address the problem of two-dimensional (2-D) phase retrieval from the magnitude of the Fourier spectrum. We consider 2-D signals that are characterized by first-order difference equations, which have a parametric representation in the Fourier domain. We show that, under appropriate stability conditions, such signals can be reconstructed uniquely from the Fourier transform magnitude. We formulate the phase retrieval problem as one of computing the parameters that uniquely determine the signal. We show that the problem can be solved by employing the annihilating filter method, particularly for the case when the parameters are distinct. For the more general case of repeated parameters, the annihilating filter method is not applicable; we circumvent the problem by employing the algebraically coupled matrix pencil (ACMP) method. In the noiseless measurement setup, exact phase retrieval is possible. We also establish a link between the proposed analysis and the 2-D cepstrum. In the noisy case, we derive Cramer-Rao lower bounds (CRLBs) on the estimates of the parameters and present a Monte Carlo performance analysis as a function of the noise level. Comparisons with state-of-the-art techniques in terms of signal reconstruction accuracy show that the proposed technique outperforms the Fienup and relaxed averaged alternating reflections (RAAR) algorithms in the presence of noise.
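The basic ambiguity that makes unstructured phase retrieval ill-posed can be checked numerically: a real signal and its time reversal share the same Fourier magnitude, so magnitude alone cannot identify the signal. It is precisely this kind of ambiguity that the parametric model and stability conditions above remove. A minimal 1-D sketch (illustrative only, not the paper's algorithm):

```python
import numpy as np

def same_fourier_magnitude(x):
    """Verify that a real signal and its time reversal have identical
    DFT magnitudes -- a standard ambiguity of magnitude-only data."""
    x = np.asarray(x, dtype=float)
    x_rev = x[::-1]
    return np.allclose(np.abs(np.fft.fft(x)), np.abs(np.fft.fft(x_rev)))
```

Any real test vector exhibits the ambiguity, e.g. `same_fourier_magnitude([1.0, 2.0, -0.5, 3.0])` returns `True`.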
Abstract:
An axis-parallel b-dimensional box is a Cartesian product R_1 x R_2 x ... x R_b, where each R_i is a closed interval [a_i, b_i] on the real line. For a graph G, its boxicity box(G) is the minimum dimension b such that G is representable as the intersection graph of boxes in b-dimensional space. Although boxicity was introduced in 1969 and has been studied extensively, there are no significant results on lower bounds for boxicity. In this paper, we develop two general methods for deriving lower bounds. Applying these methods we obtain several results, some of which are listed below:
1. The boxicity of a graph on n vertices with no universal vertices and minimum degree delta is at least n/(2(n - delta - 1)).
2. Consider the G(n,p) model of random graphs. Let p <= 1 - 40 log n / n^2. Then with high probability, box(G) = Omega(np(1 - p)). Setting p = 1/2, we immediately infer that almost all graphs have boxicity Omega(n). Another consequence of this result is as follows: for any positive constant c < 1, almost all graphs on n vertices and m <= c*C(n,2) edges have boxicity Omega(m/n).
3. Let G be a connected k-regular graph on n vertices, and let lambda be the second largest eigenvalue, in absolute value, of the adjacency matrix of G. Then the boxicity of G is at least ((k^2/lambda^2) / log(1 + k^2/lambda^2)) * ((n - k - 1)/(2n)).
4. For any positive constant c < 1, almost all balanced bipartite graphs on 2n vertices and m <= c*n^2 edges have boxicity Omega(m/n).
Abstract:
We update the constraints on two-Higgs-doublet models (2HDMs), focusing on the parameter space relevant to explaining the present muon g - 2 anomaly, Delta a(mu), in four different types of models: type I, type II, "lepton specific" (or X) and "flipped" (or Y). We show that the strong constraints placed by the electroweak precision data on the mass of the pseudoscalar Higgs, whose contribution may account for Delta a(mu), are evaded in regions where the charged scalar is degenerate with the heavy neutral one and the mixing angles alpha and beta satisfy the Standard Model limit beta - alpha ≈ pi/2. We combine theoretical constraints from vacuum stability and perturbativity with direct and indirect bounds arising from collider and B physics. Possible future constraints from the electron g - 2 are also considered. If the 126 GeV resonance discovered at the LHC is interpreted as the light CP-even Higgs boson of the 2HDM, we find that only models of type X can satisfy all the considered theoretical and experimental constraints.
Abstract:
In cells, N-10-formyltetrahydrofolate (N-10-fTHF) is required for formylation of eubacterial/organellar initiator tRNA and purine nucleotide biosynthesis. Biosynthesis of N-10-fTHF is catalyzed by 5,10-methylene-tetrahydrofolate dehydrogenase/cyclohydrolase (FolD) and/or 10-formyltetrahydrofolate synthetase (Fhs). All eubacteria possess FolD, but some possess both FolD and Fhs. However, the reasons for possessing Fhs in addition to FolD have remained unclear. We used Escherichia coli, which naturally lacks fhs, as our model. We show that in E. coli, the essential function of folD could be replaced by Clostridium perfringens fhs when it was provided on a medium-copy-number plasmid or integrated as a single-copy gene in the chromosome. The fhs-supported folD deletion (Delta folD) strains grow well in a complex medium. However, these strains require purines and glycine as supplements for growth in M9 minimal medium. The in vivo levels of N-10-fTHF in the Delta folD strain (supported by plasmid-borne fhs) were limiting despite the high capacity of the available Fhs to synthesize N-10-fTHF in vitro. Auxotrophy for purines could be alleviated by supplementing formate to the medium, and that for glycine was alleviated by engineering THF import into the cells. The Delta folD strain (harboring fhs on the chromosome) showed a high NADP(+)-to-NADPH ratio and hypersensitivity to trimethoprim. The presence of fhs in E. coli was disadvantageous for its aerobic growth. However, under hypoxia, E. coli strains harboring fhs outcompeted those lacking it. The computational analysis revealed a predominant natural occurrence of fhs in anaerobic and facultative anaerobic bacteria.
Abstract:
In this paper, we study codes with locality that can recover from two erasures via a sequence of two local parity-check computations. By a local parity-check computation, we mean recovery via a single parity-check equation of small Hamming weight. Earlier approaches considered recovery in parallel; the sequential approach allows us to potentially construct codes with improved minimum distance. These codes, which we refer to as locally 2-reconstructible codes, are a natural generalization, along one direction, of codes with all-symbol locality introduced by Gopalan et al., in which recovery from a single erasure is considered. By studying the generalized Hamming weights of the dual code, we derive upper bounds on the minimum distance of locally 2-reconstructible codes and provide constructions, based on Turan graphs, for a family of codes that are optimal with respect to this bound. The minimum distance bound derived here is universal in the sense that no code which permits all-symbol local recovery from two erasures can have larger minimum distance, regardless of the approach adopted. Our approach also leads to a new bound on the minimum distance of codes with all-symbol locality for the single-erasure case.
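The sequential idea can be seen in a toy example (our illustration, not one of the paper's Turan-graph constructions): with symbols x1, x2, x3 and two small-weight local parities p12 = x1^x2 and p23 = x2^x3, erasing x2 and x3 cannot be repaired in parallel, since p23 involves both erased symbols, but a sequence of two local parity-check computations succeeds:

```python
def sequential_two_erasure_repair(x1, p12, p23):
    """Repair erased bits x2, x3 from the surviving bit x1 and two
    local parities p12 = x1^x2, p23 = x2^x3, via a SEQUENCE of two
    local parity-check computations (toy sketch)."""
    r2 = x1 ^ p12      # first repair uses only the unerased set {x1, p12}
    r3 = r2 ^ p23      # second repair reuses the just-recovered symbol
    return r2, r3
```

The second step depends on the output of the first, which is exactly what distinguishes sequential from parallel local recovery.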
Abstract:
The electrical resistance of both electrodes of a lead-acid battery increases during discharge due to the formation of lead sulfate, an insulator. The work of Metzendorf [1] shows that resistance increases sharply at about 65% conversion of the active materials, and the battery stops discharging once this critical conversion is reached. However, these aspects are not incorporated into existing mathematical models. The present work uses the results of Metzendorf [1] and develops a model that includes the effect of variable resistance. Further, it uses a reasonable expression to account for the decrease in active area during discharge instead of the empirical equations of previous work. The model's predictions are compared with the observations of Cugnet et al. [2]. The model is as successful as the non-mechanistic models in the literature. Inclusion of the variation in electrode resistance in the model is important if one of the electrodes is the limiting reactant. If the active materials are stoichiometrically balanced, the resistance of the electrodes can be very large at the end of discharge but has only a minor effect on the charging of batteries. The model points to the significance of the electrical conductivity of the electrodes in the charging of deep-discharged batteries.
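The qualitative behavior described above can be sketched as a resistance that diverges as the converted fraction of active material approaches the ~65% critical conversion. The functional form and the parameters `r0` and `k` below are our assumptions for illustration, not the paper's actual model:

```python
def electrode_resistance(f, r0=1e-3, f_crit=0.65, k=4.0):
    """Illustrative electrode resistance vs. converted fraction f of
    active material: roughly constant early in discharge, then rising
    sharply and diverging at the critical conversion f_crit (~65%),
    at which point discharge stops.  r0 (ohm) and exponent k are
    hypothetical parameters."""
    if f >= f_crit:
        return float("inf")    # beyond critical conversion: no conduction
    return r0 / (1.0 - f / f_crit) ** k
```

Such a divergence is what makes the discharge cut off before full conversion of the active material.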
Abstract:
In a complete bipartite graph with vertex sets of cardinalities n and n', assign random weights, drawn independently for each edge from an exponential distribution with mean 1. We show that, as n -> infinity, with n' = [n/alpha] for any fixed alpha > 1, the minimum weight of many-to-one matchings converges to a constant (depending on alpha). Many-to-one matching arises as an optimization step in an algorithm for genome sequencing and as a measure of distance between finite sets. We prove that a belief propagation (BP) algorithm converges asymptotically to the optimal solution. We use the objective method of Aldous to prove our results. We build on previous work on the minimum weight matching and minimum weight edge cover problems to extend the objective method and to further the applicability of belief propagation to random combinatorial optimization problems.
Abstract:
India needs to significantly increase its electricity consumption levels, in a sustainable manner, if it is to ensure rapid economic development, a goal that remains the most potent tool for delivering adaptation capacity to its poor, who will suffer the worst consequences of climate change. Resource and supply constraints faced by conventional energy sources, techno-economic constraints faced by renewable energy sources, and the bounds imposed by climate change on fossil fuel use are likely to undermine India's quest for a robust electricity system that can effectively contribute to achieving accelerated, sustainable and inclusive economic growth. One possible way out could be transitioning to a sustainable electricity system, a trade-off solution that takes into account economic, social and environmental concerns. As a first step toward understanding this transition, we contribute an indicator-based hierarchical multidimensional framework as an analytical tool for the sustainability assessment of electricity systems, and validate it for India's national electricity system. We evaluate the Indian electricity system using this framework by comparing it with a hypothetical benchmark sustainable electricity system, created using the best indicator values realized across national electricity systems in the world. This framework, we believe, can be used to examine the social, economic and environmental implications of the current Indian electricity system as well as to set targets for future development. The analysis with the indicator framework provides a deeper understanding of the system, identifies and quantifies the prevailing sustainability gaps, and generates specific targets for interventions. We use this framework to compute a national electricity system sustainability index (NESSI) for India.
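An indicator-based composite index of this kind can be sketched as benchmark-normalized indicators averaged within and then across dimensions. The structure below is our guess at the general pattern; the dimension names, equal weighting, and the higher-is-better normalization are invented placeholders, not the paper's actual NESSI definition:

```python
def composite_index(dimensions):
    """dimensions: {dim_name: {indicator_name: (value, benchmark)}}.
    Each indicator is scored against the best (benchmark) value
    observed across national systems, capped at 1.0, then averaged
    within each dimension and across dimensions (equal weights --
    an assumption for illustration)."""
    dim_scores = []
    for indicators in dimensions.values():
        scores = [min(value / benchmark, 1.0)
                  for value, benchmark in indicators.values()]
        dim_scores.append(sum(scores) / len(scores))
    return sum(dim_scores) / len(dim_scores)
```

For example, a system that reaches half the benchmark on one economic indicator and matches it on one social indicator would score (0.5 + 1.0) / 2 = 0.75 under this toy scheme.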
Quick, Decentralized, Energy-Efficient One-Shot Max Function Computation Using Timer-Based Selection
Abstract:
In several wireless sensor networks, it is of interest to determine the maximum of the sensor readings and identify the sensor responsible for it. We propose a novel, decentralized, scalable, energy-efficient, timer-based, one-shot max function computation (TMC) algorithm. In it, the sensor nodes do not transmit their readings in a centrally pre-defined sequence. Instead, the nodes are grouped into clusters, and computation occurs over two contention stages. First, the nodes in each cluster contend with each other using the timer scheme to transmit their reading to their cluster-heads. Thereafter, the cluster-heads use the timer scheme to transmit the highest sensor reading in their cluster to the fusion node. One new challenge is that the use of the timer scheme leads to collisions, which can make the algorithm fail. We optimize the algorithm to minimize the average time required to determine the maximum subject to a constraint on the probability that it fails to find the maximum. TMC significantly lowers average function computation time, average number of transmissions, and average energy consumption compared to approaches proposed in the literature.
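The core timer idea can be sketched in a few lines (our simplification of one contention stage, not the paper's optimized two-stage TMC): each node maps its reading through a strictly decreasing timer function, so the node with the largest reading transmits first, and two timers firing too close together model the collision event whose probability the paper constrains:

```python
def timer_max(readings, t_max=10.0, collision_window=0.5):
    """One contention round of timer-based max selection (toy sketch).
    readings: values in [0, 1].  Each node i sets a timer that
    decreases with its reading, so the maximum fires earliest.
    Returns the index of the max reading, or None on collision
    (two earliest timers within `collision_window`)."""
    timers = sorted((t_max * (1.0 - r), i) for i, r in enumerate(readings))
    (t0, i0), (t1, _) = timers[0], timers[1]
    if t1 - t0 < collision_window:
        return None          # collision: the round fails to identify the max
    return i0
```

Shrinking `collision_window` (or enlarging `t_max`) lowers the failure probability at the cost of a longer average computation time, which is the trade-off the paper optimizes.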
Abstract:
We analyse the hVV (V = W, Z) vertex in a model-independent way using Vh production. To that end, we consider possible corrections to the Standard Model Higgs Lagrangian in the form of higher-dimensional operators which parametrise the effects of new physics. In our analysis, we pay special attention to linear observables that can be used to probe CP violation in this vertex. Considering the associated production of a Higgs boson with a vector boson (W or Z), we use jet substructure methods to define angular observables which are sensitive to new physics effects, including an asymmetry which is linearly sensitive to the presence of CP-odd effects. We demonstrate how to use these observables to place bounds on the presence of higher-dimensional operators, and quantify these statements using a log-likelihood analysis. Our approach allows one to separately probe the hZZ and hWW vertices, involving arbitrary combinations of BSM operators, at the Large Hadron Collider.
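The generic form of such a counting asymmetry, linear in a CP-odd contribution, can be sketched as the normalized difference between events with positive and negative values of the angular observable (an illustration of the general construction, not the paper's specific observable):

```python
def asymmetry(observable_values):
    """Counting asymmetry A = (N+ - N-) / (N+ + N-) for an angular
    observable: nonzero A signals a contribution that is odd under
    sign flip of the observable, e.g. a CP-odd operator."""
    n_plus = sum(1 for v in observable_values if v > 0)
    n_minus = sum(1 for v in observable_values if v < 0)
    return (n_plus - n_minus) / (n_plus + n_minus)
```

A purely CP-even sample is symmetric in the observable and gives A = 0 up to statistical fluctuations, which is why A is linearly sensitive to CP-odd effects.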