238 results for Set functions.
Abstract:
Let X_1, ..., X_m be a set of m statistically dependent sources over the common alphabet F_q that are linearly independent when considered as functions over the sample space. We consider a distributed function computation setting in which the receiver is interested in the lossless computation of the elements of an s-dimensional subspace W spanned by the elements of the row vector [X_1, ..., X_m]Γ, where the (m × s) matrix Γ has rank s. A sequence of three increasingly refined approaches is presented, all based on linear encoders. The first approach uses a common matrix to encode all the sources and a Körner-Marton-like receiver to directly compute W. The second improves upon the first by showing that it is often more efficient to compute a carefully chosen superspace U of W. The superspace is identified by showing that the joint distribution of the {X_i} induces a unique decomposition of the set of all linear combinations of the {X_i} into a chain of subspaces identified by a normalized measure of entropy. This subspace chain also suggests a third approach, one that employs nested codes. For any joint distribution of the {X_i} and any W, the sum rate of the nested-code approach is no larger than that under the Slepian-Wolf (SW) approach. Under the SW approach, W is computed by first recovering each of the {X_i}. For a large class of joint distributions and subspaces W, the nested-code approach is shown to improve upon SW. Additionally, a class of source distributions and subspaces is identified for which the nested-code approach is sum-rate optimal.
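The first approach rests on the fact that a common linear encoder commutes with addition of the sources, which is what a Körner-Marton-style receiver exploits. Below is a minimal sketch of that linearity over F_2; the matrix A, block length, and variable names are illustrative assumptions and do not reproduce the paper's code constructions or decoder.

```python
# Minimal sketch (not the paper's construction): with a common encoding matrix A,
# the mod-2 sum of the transmitted encodings equals the encoding of the mod-2 sum
# of the sources, so the receiver can work directly with the desired combination.
import numpy as np

rng = np.random.default_rng(0)
n, k = 16, 8                                # assumed block length and encoding length
A = rng.integers(0, 2, size=(k, n))         # common linear encoder shared by all sources
x1 = rng.integers(0, 2, size=n)             # realisation of source X_1
x2 = rng.integers(0, 2, size=n)             # realisation of source X_2

s1 = A @ x1 % 2                             # transmitted by encoder 1
s2 = A @ x2 % 2                             # transmitted by encoder 2
assert np.array_equal((s1 + s2) % 2, A @ ((x1 + x2) % 2) % 2)
```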
Abstract:
We propose a new set of input voltage equations (IVEs) for the independent double-gate MOSFET, obtained by rigorously solving the governing bipolar Poisson equation (PE). The proposed IVEs, which involve Legendre's incomplete elliptic integral of the first kind and Jacobian elliptic functions and are valid from the accumulation to the inversion regime, are shown to be in good agreement with the numerical solution of the same PE for all bias conditions.
Abstract:
In this paper, we explore noise-tolerant learning of classifiers. We formulate the problem as follows. We assume that there is an unobservable training set that is noise-free. The actual training set given to the learning algorithm is obtained from this ideal data set by corrupting the class label of each example, where the probability that the label of an example is corrupted is a function of the example's feature vector. This accounts for most kinds of noisy data encountered in practice. We say that a learning method is noise tolerant if the classifiers learnt from noise-free data and from noisy data have the same classification accuracy on the noise-free data. We analyze the noise-tolerance properties of risk minimization under different loss functions. We show that risk minimization under the 0-1 loss function has impressive noise-tolerance properties, that risk minimization under the squared error loss is tolerant only to uniform noise, and that risk minimization under other loss functions is not noise tolerant. We conclude with a discussion of the implications of these theoretical results.
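To make the uniform-noise case concrete, the toy simulation below checks numerically that the 0-1 risk computed on uniformly corrupted labels is an affine function of the clean 0-1 risk, so both share the same minimiser over a family of threshold classifiers. The data, thresholds, and noise rate are hypothetical; the paper's analysis covers more general, feature-dependent noise.

```python
# Illustrative sketch (not from the paper): under uniform label noise with flip
# rate eta < 0.5, the noisy 0-1 risk of any classifier f satisfies
#   R_noisy(f) = (1 - 2*eta) * R_clean(f) + eta,
# so both risks share the same minimiser.  Checked on a toy 1-D threshold family.
import numpy as np

rng = np.random.default_rng(1)
n, eta = 100_000, 0.3
x = rng.uniform(-1.0, 1.0, size=n)
y = (x > 0.2).astype(int)                                  # noise-free labels, true threshold 0.2
y_noisy = np.where(rng.uniform(size=n) < eta, 1 - y, y)    # uniform label noise

thresholds = np.linspace(-1.0, 1.0, 81)
risk_clean = [np.mean((x > t).astype(int) != y) for t in thresholds]
risk_noisy = [np.mean((x > t).astype(int) != y_noisy) for t in thresholds]

print("clean-risk minimiser:", thresholds[int(np.argmin(risk_clean))])
print("noisy-risk minimiser:", thresholds[int(np.argmin(risk_noisy))])   # ~ the same
```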
Abstract:
Restriction-modification (R-M) systems are ubiquitous and are often considered primitive immune systems in bacteria. Their diversity and prevalence across the prokaryotic kingdom are an indication of their success as a defense mechanism against invading genomes. However, their cellular defense function does not adequately explain the basis for their immaculate specificity in sequence recognition or their nonuniform distribution, ranging from none to too many, across diverse species. The present review deals with new developments that provide insights into the roles of these enzymes in other aspects of cellular function, with emphasis on novel hypotheses and findings that have not yet been dealt with in a critical review. Emerging studies indicate their role in cellular processes beyond host defense, such as virulence and even control of the organism's rate of evolution. We also discuss how R-M systems could have successfully evolved and taken on additional cellular portfolios, thereby increasing the relative fitness of their hosts in the population.
Abstract:
We present an experimental set-up, developed for the first time in India, for the determination of the mixing ratio and carbon isotopic ratio of air-CO2. The set-up includes traps for collection and extraction of CO2 from air samples using cryogenic procedures, followed by measurement of the CO2 mixing ratio using an MKS Baratron gauge and analysis of isotopic ratios using the dual inlet peripheral of a high-sensitivity isotope ratio mass spectrometer (IRMS), a MAT 253. The internal reproducibility (precision) for the δ13C measurement, established from repeat analyses of CO2, is ±0.03‰. The set-up is calibrated with international carbonate and air-CO2 standards. An in-house air-CO2 mixture, 'OASIS AIRMIX', is prepared by mixing CO2 from a high-purity cylinder with O2 and N2, and an aliquot of this mixture is routinely analyzed together with the air samples. The external reproducibilities for the measurement of the CO2 mixing ratio and the carbon isotopic ratio are ±7 μmol/mol (n = 169) and ±0.05‰ (n = 169), respectively, based on the mean difference between two aliquots of the reference air mixture analyzed during daily operation from November 2009 to December 2011. The correction due to the isobaric interference of N2O on air-CO2 samples is determined separately by analyzing mixtures of CO2 (of known isotopic composition) and N2O in varying proportions; a correction of +0.2‰ in the δ13C value is determined for an N2O concentration of 329 ppb. As an application, we present results from an experiment conducted during the solar eclipse of 2010. The isotopic ratio and mixing ratio of CO2 in the air samples collected during the event differ from those of neighbouring samples, suggesting a role of atmospheric inversion in trapping CO2 emitted into the urban atmosphere during the eclipse.
Abstract:
In this paper, we explore fundamental limits on the number of tests required to identify a given number of "healthy" items from a large population containing a small number of "defective" items, in a nonadaptive group testing framework. Specifically, we derive mutual-information-based upper bounds on the number of tests required to identify the required number of healthy items. Our results show that an impressive reduction in the number of tests is achievable compared to the conventional approach of using classical group testing to first identify the defective items and then pick the required number of healthy items from the complement set. For example, to identify L healthy items out of a population of N items containing K defective items, when the tests are reliable, our results show that O(K(L - 1)/(N - K)) measurements are sufficient, whereas the conventional approach requires O(K log(N/K)) measurements. We derive our results in a general sparse signal setup, so they are also applicable to other sparse-signal applications such as compressive sensing.
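As a rough illustration of the gap between the two scalings quoted above, the snippet below evaluates both expressions with all constants set to one; the population sizes are made up and the numbers are only order-of-magnitude indications, not the paper's bounds.

```python
# Back-of-the-envelope comparison of the two scalings (constants ignored):
# identifying L healthy items directly versus first recovering all K defectives
# via classical group testing and then picking healthy items from the rest.
import math

N, K, L = 10_000, 100, 500              # assumed example sizes
direct       = K * (L - 1) / (N - K)    # O(K(L-1)/(N-K)) tests, healthy-first route
conventional = K * math.log2(N / K)     # O(K log(N/K)) tests, defective-first route

print(f"direct identification ~ {direct:.1f} tests")
print(f"conventional two-step ~ {conventional:.1f} tests")
```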
Abstract:
Bactericidal/permeability-increasing protein (BPI), a 55-60 kDa protein first reported in 1975, has come a long way as a protein with multifunctional roles. Its classical role in neutralizing endotoxin (LPS) raised high hopes for septic shock patients. Today, BPI is not just an LPS-neutralizing protein but a protein with diverse functions, as varied as inhibition of endothelial cell growth and of dendritic cell maturation, or action as an anti-angiogenic, chemoattractant, or opsonizing agent. Though the available literature is extremely limited, it is fascinating to see how BPI is gaining importance as a signalling molecule. In this review, we briefly summarize recent research on the multiple roles of BPI and its use as a therapeutic.
Abstract:
The multiple short introns in Schizosaccharomyces pombe genes, with their degenerate cis sequences and atypically positioned polypyrimidine tracts, make an interesting model in which to investigate canonical and alternative roles for conserved splicing factors. Here we report functions and interactions of the S. pombe slu7(+) (spslu7(+)) gene product, known from Saccharomyces cerevisiae and human in vitro reactions to assemble into spliceosomes after the first catalytic reaction and to dictate 3' splice site choice during the second reaction. Using a missense mutant of this essential S. pombe factor, we detected a range of global splicing derangements, which were validated in assays of the splicing status of diverse candidate introns. We ascribe widespread, intron-specific SpSlu7 functions and have deduced several features, including the branch-nucleotide-to-3'-splice-site distance, intron length, and the A/U content at the intron's 5' end, that influence an intron's dependence on SpSlu7. The data imply dynamic substrate-splicing factor relationships in multi-intron transcripts. Interestingly, an unexpected early splicing arrest in spslu7-2 revealed a role before catalysis. We detected a salt-stable association with the U5 snRNP and observed genetic interactions with spprp1(+), a homolog of the human U5-102K factor. Together, these observations point to an altered recruitment of and dependence on SpSlu7, suggest its role in facilitating transitions that promote catalysis, and highlight the diversity in spliceosome assembly.
Abstract:
The contour tree is a topological abstraction of a scalar field that captures the evolution of level set connectivity. It is an effective representation for visual exploration and analysis of scientific data. We describe a work-efficient, output-sensitive, and scalable parallel algorithm for computing the contour tree of a scalar field defined on a domain represented by either an unstructured mesh or a structured grid. A hybrid implementation of the algorithm using the GPU and a multi-core CPU can compute the contour tree of an input containing 16 million vertices in less than ten seconds, with a speedup factor of up to 13. Experiments based on an implementation in a multi-core CPU environment show near-linear speedup for large data sets.
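The connectivity information a contour (or merge) tree encodes can be illustrated with a much-simplified serial sweep. The sketch below tracks superlevel-set components of a toy 1-D scalar field with union-find; it is only an illustration of the underlying idea, not the work-efficient parallel GPU/CPU algorithm described in the abstract.

```python
# Simplified serial illustration: sweep a 1-D scalar field from high to low
# values and use union-find to record where superlevel-set components appear
# (maxima) and merge (saddles) -- the events a merge/contour tree captures.
import numpy as np

def superlevel_merge_events(values):
    order = np.argsort(-np.asarray(values))     # vertices in descending order of value
    parent = {}                                 # union-find over already-swept vertices

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]       # path halving
            v = parent[v]
        return v

    events = []
    for v in order:
        parent[v] = v                           # v starts a new superlevel-set component
        created = True
        for u in (v - 1, v + 1):                # 1-D neighbours
            if u in parent:
                ru, rv = find(u), find(v)
                if ru != rv:
                    parent[rv] = ru
                    if created:
                        created = False         # v joined an existing component
                    else:
                        events.append(("merge at", int(v)))
                else:
                    created = False
        if created:
            events.append(("maximum at", int(v)))
    return events

print(superlevel_merge_events([1, 5, 2, 4, 0, 6, 3]))
```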
Abstract:
In social choice theory, preference aggregation refers to computing an aggregate preference over a set of alternatives given the individual preferences of all the agents. In real-world scenarios, it may not be feasible to gather preferences from all the agents, and determining the aggregate preference is computationally intensive. In this paper, we show that the aggregate preference of the agents in a social network can be computed efficiently and with sufficient accuracy using preferences elicited from a small subset of critical nodes in the network. Our methodology uses a model built from real-world data obtained through a survey of human subjects, and it exploits network structure and homophily of relationships. Our approach guarantees good performance for aggregation rules that satisfy a property we call expected weak insensitivity, and we demonstrate empirically that many practically relevant aggregation rules satisfy this property. We also show that two natural objective functions in this context satisfy certain properties that make our methodology attractive for scalable preference aggregation over large-scale social networks. We conclude that our approach is superior to random polling when aggregating preferences related to individualistic metrics, whereas random polling is acceptable in the case of social metrics.
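For readers unfamiliar with the setting, a generic aggregation rule is sketched below (Borda count over complete rankings). It is meant only to make "computing an aggregate preference" concrete; it is not the paper's elicitation or aggregation method, and the agents and alternatives are invented.

```python
# Generic preference aggregation example: Borda count over complete rankings.
from collections import defaultdict

def borda(preferences):
    """Each preference is a ranking (best first) over the same alternatives."""
    scores = defaultdict(int)
    m = len(preferences[0])
    for ranking in preferences:
        for position, alt in enumerate(ranking):
            scores[alt] += m - 1 - position            # m-1 points for first place, ...
    return sorted(scores, key=scores.get, reverse=True)  # aggregate preference order

agents = [["a", "b", "c"], ["b", "a", "c"], ["a", "c", "b"]]
print(borda(agents))    # ['a', 'b', 'c']
```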
Abstract:
Conceptual design involves identification of the required functions of the intended design, generation of concepts to fulfill these functions, and evaluation of these concepts to select the most promising ones for further development. The focus of this paper is the second phase, concept generation, in which a challenge has been to develop possible physical embodiments to offer designers for exploration and evaluation. This paper investigates how to transform and thus synthesise possible generic physical embodiments, and it reports an implemented method that can automatically generate such embodiments. The proposed method transforms a variety of possible initial solutions to a design problem into a set of physical solutions described in terms of abstractions of mechanical movements. The underlying principle of the method is to link common attributes between a specific abstract representation and its possible physical objects. For a given input, the method produces a set of concepts in terms of their generic physical embodiments, and it can be used to support designers in starting from a given input-output function and systematically searching for physical objects to consider in terms of simplified functional, spatial, and mechanical movement requirements.
Abstract:
We consider four-dimensional CFTs which admit a large-N expansion and whose spectrum contains states whose conformal dimensions do not scale with N. We explicitly reorganise the partition function obtained by exponentiating the one-particle partition function of these states into a heat kernel form for the dual string spectrum on AdS_5. On very general grounds, the heat kernel answer can be expressed in terms of a convolution of the one-particle partition function of the light states in the four-dimensional CFT.
Abstract:
A series expansion for the Heckman-Opdam hypergeometric functions φ_λ is obtained for all λ ∈ a*_C. As a consequence, estimates for φ_λ away from the walls of a Weyl chamber are established. We also characterize the bounded hypergeometric functions and thus prove an analogue of the celebrated theorem of Helgason and Johnson on the bounded spherical functions on a Riemannian symmetric space of the noncompact type. The L^p theory for the hypergeometric Fourier transform is developed for 0 < p < 2. In particular, an inversion formula is proved when 1 ≤ p < 2.
Abstract:
The basic requirements for an autopilot are fast response and minimal steady-state error, both of which are needed for good guidance performance. The highly nonlinear nature of missile dynamics, arising from severe kinematic and inertial coupling of the missile airframe as well as from the aerodynamics, makes it challenging to design an autopilot with satisfactory performance over all flight conditions in probable engagements. Dynamic inversion is a popular nonlinear control technique for this kind of scenario, but its drawback is sensitivity to parameter perturbations. To overcome this problem, a neural network is used to capture the parameter uncertainty online. The choice of basis function plays a major role in capturing the unknown dynamics. In this paper, several basis functions are studied for approximating the unknown dynamics; the cosine basis function yields the best response among those considered. A neural network with cosine basis functions improves both the performance and the robustness of the autopilot compared to dynamic inversion without a neural network.
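As a toy illustration of why the choice of basis matters, the sketch below fits an unknown scalar function with a cosine basis expansion by batch least squares. The target function, number of terms, and input range are assumptions; the autopilot in the paper adapts such weights online inside a dynamic-inversion loop, which is not reproduced here.

```python
# Minimal sketch of a "cosine basis" approximator: represent an unknown scalar
# function as a linear combination of cosine features and fit the weights by
# batch least squares (the paper updates such weights adaptively online).
import numpy as np

def cosine_features(x, n_terms=8, x_range=(0.0, 1.0)):
    lo, hi = x_range
    z = (x - lo) / (hi - lo)                     # normalise input to [0, 1]
    return np.stack([np.cos(np.pi * k * z) for k in range(n_terms)], axis=-1)

rng = np.random.default_rng(2)
x = rng.uniform(0.0, 1.0, size=500)
f_unknown = np.sin(3 * x) + 0.5 * x**2           # stand-in for unmodelled dynamics

Phi = cosine_features(x)
weights, *_ = np.linalg.lstsq(Phi, f_unknown, rcond=None)

x_test = np.linspace(0.0, 1.0, 5)
print(cosine_features(x_test) @ weights)         # basis-expansion approximation
print(np.sin(3 * x_test) + 0.5 * x_test**2)      # true values, for comparison
```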