948 results for Teleonomic Entropy
Abstract:
We consider the problem of building robust fuzzy extractors, which allow two parties holding similar random variables W, W' to agree on a secret key R in the presence of an active adversary. Robust fuzzy extractors were defined by Dodis et al. in Crypto 2006 [6] to be noninteractive, i.e., only one message P, which can be modified by an unbounded adversary, can pass from one party to the other. This allows them to be used by a single party at different points in time (e.g., for key recovery or biometric authentication), but also presents an additional challenge: what if R is used, and thus possibly observed by the adversary, before the adversary has a chance to modify P? Fuzzy extractors secure against such a strong attack are called post-application robust. We construct a fuzzy extractor with post-application robustness that extracts a shared secret key of up to (2m−n)/2 bits (depending on error-tolerance and security parameters), where n is the bit-length and m is the entropy of W. The previously best known result, also of Dodis et al. [6], extracted up to (2m−n)/3 bits (depending on the same parameters).
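A back-of-the-envelope sketch of the quoted extraction rates (not the construction itself): the two functions below evaluate the upper bounds stated in the abstract, ignoring the additional losses that depend on the error-tolerance and security parameters; the example values of n and m are hypothetical.

```python
def key_length_new(n: int, m: int) -> float:
    """Upper bound from this work: (2m - n)/2 bits, before parameter-dependent losses."""
    return (2 * m - n) / 2

def key_length_previous(n: int, m: int) -> float:
    """Upper bound from Dodis et al. [6]: (2m - n)/3 bits, before parameter-dependent losses."""
    return (2 * m - n) / 3

# Example: a 1024-bit source W with 700 bits of entropy.
print(key_length_new(1024, 700), key_length_previous(1024, 700))  # 188.0 vs ~125.3
```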
Abstract:
The problem of discovering frequent poly-regions (i.e. regions of high occurrence of a set of items or patterns of a given alphabet) in a sequence is studied, and three efficient approaches are proposed to solve it. The first one is entropy-based and applies a recursive segmentation technique that produces a set of candidate segments which may potentially lead to a poly-region. The key idea of the second approach is the use of a set of sliding windows over the sequence. Each sliding window covers a sequence segment and keeps a set of statistics that mainly include the number of occurrences of each item or pattern in that segment. Combining these statistics efficiently yields the complete set of poly-regions in the given sequence. The third approach applies a technique based on the majority vote, achieving linear running time with a minimal number of false negatives. After identifying the poly-regions, the sequence is converted to a sequence of labeled intervals (each one corresponding to a poly-region). An efficient algorithm for mining frequent arrangements of intervals is applied to the converted sequence to discover frequently occurring arrangements of poly-regions in different parts of DNA, including coding regions. The proposed algorithms are tested on various DNA sequences, producing results of significant biological meaning.
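A hedged sketch of the second (sliding-window) idea, not the paper's implementation: a fixed-size window slides over the sequence, updates per-item occurrence counts incrementally, and reports windows where one item's density exceeds a threshold. The window size, density threshold, and function name are illustrative choices.

```python
from collections import Counter

def dense_windows(seq, item, window=50, min_density=0.6):
    """Return (start, end) windows where `item` occurs with density >= min_density."""
    if len(seq) < window:
        return []
    counts = Counter(seq[:window])
    hits = []
    for start in range(len(seq) - window + 1):
        if start > 0:  # slide the window: drop the outgoing symbol, add the incoming one
            counts[seq[start - 1]] -= 1
            counts[seq[start + window - 1]] += 1
        if counts[item] / window >= min_density:
            hits.append((start, start + window))
    return hits

# Toy DNA-like example: windows dense in 'C'.
print(dense_windows("ATATACCCCCCCCCCGGATATAT", "C", window=10, min_density=0.8))
```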
Abstract:
The increasing practicality of large-scale flow capture makes it possible to conceive of traffic analysis methods that detect and identify a large and diverse set of anomalies. However, the challenge of effectively analyzing this massive data source for anomaly diagnosis is as yet unmet. We argue that the distributions of packet features (IP addresses and ports) observed in flow traces reveal both the presence and the structure of a wide range of anomalies. Using entropy as a summarization tool, we show that the analysis of feature distributions leads to significant advances on two fronts: (1) it enables highly sensitive detection of a wide range of anomalies, augmenting detections by volume-based methods, and (2) it enables automatic classification of anomalies via unsupervised learning. We show that using feature distributions, anomalies naturally fall into distinct and meaningful clusters. These clusters can be used to automatically classify anomalies and to uncover new anomaly types. We validate our claims on data from two backbone networks (Abilene and Geant) and conclude that feature distributions show promise as a key element of a fairly general network anomaly diagnosis framework.
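A minimal sketch of the entropy summarization step, assuming flow records are (srcIP, dstIP, srcPort, dstPort) tuples; time binning, detection thresholds, and the unsupervised clustering stage are outside this sketch. Entropy near 0 means a feature is concentrated on few values, entropy near 1 means it is dispersed.

```python
import math
from collections import Counter

def normalized_entropy(values):
    """Shannon entropy of the empirical distribution, normalized by log2(#distinct values)."""
    counts = Counter(values)
    total = sum(counts.values())
    h = -sum((c / total) * math.log2(c / total) for c in counts.values())
    return h / math.log2(len(counts)) if len(counts) > 1 else 0.0

# Toy flow records for one time bin: many sources hitting one destination port.
flows = [("10.0.0.1", "10.0.0.9", 51000, 80),
         ("10.0.0.2", "10.0.0.9", 51001, 80),
         ("10.0.0.3", "10.0.0.9", 51002, 80)]
for i, name in enumerate(["srcIP", "dstIP", "srcPort", "dstPort"]):
    print(name, round(normalized_entropy(f[i] for f in flows), 3))
```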
Abstract:
The problem of discovering frequent arrangements of regions of high occurrence of one or more items of a given alphabet in a sequence is studied, and two efficient approaches are proposed to solve it. The first approach is entropy-based and uses an existing recursive segmentation technique to split the input sequence into a set of homogeneous segments. The key idea of the second approach is to use a set of sliding windows over the sequence. Each sliding window keeps a set of statistics of a sequence segment that mainly includes the number of occurrences of each item in that segment. Combining these statistics efficiently yields the complete set of regions of high occurrence of the items of the given alphabet. After identifying these regions, the sequence is converted to a sequence of labeled intervals (each one corresponding to a region). An efficient algorithm for mining frequent arrangements of temporal intervals on a single sequence is applied to the converted sequence to discover frequently occurring arrangements of these regions. The proposed algorithms are tested on various DNA sequences, producing results with significant biological meaning.
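In the spirit of the first (entropy-based) approach, here is a generic recursive segmentation sketch, not the specific technique the paper reuses: split a symbol sequence at the cut point that maximizes the Jensen-Shannon divergence between the two parts, and recurse while the entropy gain exceeds a threshold. The gain threshold and minimum segment length are hypothetical parameters.

```python
import math
from collections import Counter

def entropy(seq):
    n = len(seq)
    return -sum((c / n) * math.log2(c / n) for c in Counter(seq).values())

def js_gain(seq, cut):
    """Entropy of the whole segment minus the length-weighted entropy of the two parts."""
    left, right = seq[:cut], seq[cut:]
    return entropy(seq) - (len(left) * entropy(left) + len(right) * entropy(right)) / len(seq)

def segment(seq, offset=0, min_gain=0.1, min_len=5):
    if len(seq) < 2 * min_len:
        return [(offset, offset + len(seq))]
    cut, gain = max(((c, js_gain(seq, c)) for c in range(min_len, len(seq) - min_len + 1)),
                    key=lambda x: x[1])
    if gain < min_gain:
        return [(offset, offset + len(seq))]
    return (segment(seq[:cut], offset, min_gain, min_len)
            + segment(seq[cut:], offset + cut, min_gain, min_len))

print(segment("AAAAAAAAAACCCCCCCCCC"))  # -> [(0, 10), (10, 20)]
```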
Abstract:
Motivated by accurate average-case analysis, MOdular Quantitative Analysis (MOQA) is developed at the Centre for Efficiency Oriented Languages (CEOL). In essence, MOQA allows the programmer to determine the average running time of a broad class of programs directly from the code in a (semi-)automated way. The MOQA approach has the property of randomness preservation, which means that applying any operation to a random structure results in an output isomorphic to one or more random structures; this property is key to systematic timing. Based on original MOQA research, we discuss the design and implementation of a new domain-specific scripting language based on randomness-preserving operations and random structures. It is designed to facilitate compositional timing by systematically tracking the distributions of inputs and outputs. The notion of a labelled partial order (LPO) is the basic data type in the language. The programmer uses built-in MOQA operations together with restricted control flow statements to design MOQA programs. This MOQA language is formally specified both syntactically and semantically in this thesis. A practical language interpreter implementation is provided and discussed. By analysing new algorithms and data restructuring operations, we demonstrate the wide applicability of the MOQA approach. We also extend MOQA theory to a number of other domains besides average-case analysis, showing the strong connections between MOQA and parallel computing, reversible computing, and data entropy analysis.
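To make the labelled partial order notion concrete, here is a minimal, hypothetical data-type sketch; it is not the MOQA language's actual implementation. Nodes carry labels from a totally ordered set, and a labelling is admissible when it is monotone with respect to the partial order.

```python
from dataclasses import dataclass, field

@dataclass
class LPO:
    nodes: set = field(default_factory=set)     # node identifiers
    below: set = field(default_factory=set)     # pairs (x, y) meaning x < y in the order
    labels: dict = field(default_factory=dict)  # node -> label value

    def respects_order(self) -> bool:
        """True if the labelling is increasing along every ordered pair."""
        return all(self.labels[x] < self.labels[y] for x, y in self.below)

# A three-node V-shaped poset labelled with integers.
p = LPO(nodes={"a", "b", "c"}, below={("a", "b"), ("a", "c")},
        labels={"a": 1, "b": 3, "c": 2})
print(p.respects_order())  # True
```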
Abstract:
Very Long Baseline Interferometry (VLBI) polarisation observations of the relativistic jets from Active Galactic Nuclei (AGN) allow the magnetic field environment around the jet to be probed. In particular, multi-wavelength observations of AGN jets allow the creation of Faraday rotation measure maps which can be used to gain an insight into the magnetic field component of the jet along the line of sight. Recent polarisation and Faraday rotation measure maps of many AGN show possible evidence for the presence of helical magnetic fields. The detection of such evidence is highly dependent both on the resolution of the images and on the quality of the error analysis and statistics used in the detection. This thesis focuses on the development of new methods for high-resolution radio astronomy imaging in both of these areas. An implementation of the Maximum Entropy Method (MEM) suitable for multi-wavelength VLBI polarisation observations is presented, and the advantage in resolution it possesses over the CLEAN algorithm is discussed and demonstrated using Monte Carlo simulations. This new polarisation MEM code has been applied to multi-wavelength imaging of the Active Galactic Nuclei 0716+714, Mrk 501 and 1633+382, in each case providing improved polarisation imaging compared to deconvolution using the standard CLEAN algorithm. The first MEM-based fractional polarisation and Faraday-rotation VLBI images are presented, using these sources as examples. Recent detections of gradients in Faraday rotation measure are presented, including an observation of a reversal in the direction of a gradient further along a jet. Simulated observations confirming the observability of such a phenomenon are conducted, and possible explanations for a reversal in the direction of the Faraday rotation measure gradient are discussed. These results were originally published in Mahmud et al. (2013). Finally, a new error model for the CLEAN algorithm is developed which takes into account correlation between neighbouring pixels. Comparison of error maps calculated using this new model with Monte Carlo maps shows striking similarities when the sources considered are well resolved, indicating that the method correctly reproduces at least some component of the overall uncertainty in the images. The calculation of many useful quantities using this model is demonstrated, and the advantages it offers over traditional single-pixel calculations are illustrated. The limitations of the model as revealed by Monte Carlo simulations are also discussed; unfortunately, the error model does not work well when applied to compact regions of emission.
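For orientation, one commonly used form of the MEM objective in total-intensity radio interferometric imaging is sketched below; this is a hedged illustration, and the multi-wavelength polarisation generalisation developed in the thesis differs in detail. Here I_k are model pixel intensities, M_k a prior (default) image, V_j model and observed visibilities, sigma_j the noise, and alpha a Lagrange multiplier; all symbols are assumptions of this sketch.

```latex
\[
  J \;=\; \underbrace{-\sum_{k} I_k \,\ln\!\frac{I_k}{M_k\,e}}_{\text{entropy } H}
          \;-\; \alpha\,\chi^{2},
  \qquad
  \chi^{2} \;=\; \sum_{j} \frac{\bigl|V_j^{\mathrm{model}} - V_j^{\mathrm{obs}}\bigr|^{2}}{\sigma_j^{2}} .
\]
```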
Abstract:
We revisit the well-known problem of sorting under partial information: sort a finite set given the outcomes of comparisons between some pairs of elements. The input is a partially ordered set P, and solving the problem amounts to discovering an unknown linear extension of P, using pairwise comparisons. The information-theoretic lower bound on the number of comparisons needed in the worst case is log e(P), the binary logarithm of the number of linear extensions of P. In a breakthrough paper, Jeff Kahn and Jeong Han Kim (STOC 1992) showed that there exists a polynomial-time algorithm for the problem achieving this bound up to a constant factor. Their algorithm invokes the ellipsoid algorithm at each iteration for determining the next comparison, making it impractical. We develop efficient algorithms for sorting under partial information. Like Kahn and Kim, our approach relies on graph entropy. However, our algorithms differ in essential ways from theirs. Rather than resorting to convex programming for computing the entropy, we approximate the entropy, or make sure it is computed only once in a restricted class of graphs, permitting the use of a simpler algorithm. Specifically, we present: an O(n^2) algorithm performing O(log n · log e(P)) comparisons; an O(n^2.5) algorithm performing at most (1+ε) log e(P) + O_ε(n) comparisons; and an O(n^2.5) algorithm performing O(log e(P)) comparisons. All our algorithms are simple to implement. © 2010 ACM.
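A small sketch illustrating the information-theoretic lower bound mentioned above: for a tiny poset P, count its linear extensions e(P) by brute force and report log2 e(P), the minimum number of comparisons any strategy needs in the worst case. The paper's algorithms of course avoid this exponential enumeration; the example poset is arbitrary.

```python
import math
from itertools import permutations

def linear_extensions(elements, relations):
    """Count orderings consistent with every pair (a, b), meaning a must precede b."""
    count = 0
    for perm in permutations(elements):
        pos = {x: i for i, x in enumerate(perm)}
        if all(pos[a] < pos[b] for a, b in relations):
            count += 1
    return count

P = (["a", "b", "c", "d"], [("a", "b"), ("a", "c")])  # a below b and c; d incomparable
e = linear_extensions(*P)
print(e, math.log2(e))  # 8 linear extensions, so at least 3 comparisons in the worst case
```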
Abstract:
The Pulitzer Prize in Music, established in 1943, is one of America's most prestigious awards. It has been awarded to fifty-three composers for a "distinguished musical composition of significant dimension by an American that has had its first performance in the United States during the year." Composers who have won the Pulitzer Prize are considered to be at the pinnacle of their creativity and have provided the musical world with classical music compositions worthy of future notice. By tracing the history of Pulitzer Prize-winning composers and their compositions, researchers and musicians enhance their understanding of the historical evolution of American music and its impact on American culture. Although the clarinet music of some of these composers is rarely performed today, their names will be forever linked to the Pulitzer, and because of that, their compositions will enjoy a certain sense of immortality. Of the fifty-four composers who have won the award, forty-seven have written for the clarinet in a solo or chamber music setting (five or fewer instruments). Just as each Pulitzer Prize-winning composition is a snapshot of the state of American music at that time, these works trace the history of American clarinet musical development, and therefore, they are valuable additions to the clarinet repertoire and worthy of performance. This dissertation project consists of two recitals featuring the solo and chamber clarinet music of sixteen Pulitzer Prize-winning composers, extended program notes containing information on each composer's life, their music, the Pulitzer Prize-winning composition and the recital selection, and a complete list of all Pulitzer Prize-winning composers and their solo and chamber clarinet music.
Featured composers and works:
Dominick Argento, To Be Sung Upon the Water
Leslie Bassett, Soliloquies
William Bolcom, Little Suite of Four Dances
Aaron Copland, As it Fell Upon a Day
John Corigliano, Soliloquy
Norman Dello Joio, Concertante
Morton Gould, Benny's Gig
Charles Ives, Largo
Douglas Moore, Quintet for Clarinet and Strings
George Perle, Three Sonatas
Quincy Porter, Quintet for Clarinet and Strings
Mel Powell, Clarinade
Shulamit Ran, Private Game
Joseph Schwantner, Entropy
Leo Sowerby, Sonata
Ernst Toch, Adagio elegiaco
Abstract:
Here we show that the configuration of a slender enclosure can be optimized such that the radiation heating of a stream of solid is performed with minimal fuel consumption at the global level. The solid moves longitudinally at constant rate through the enclosure. The enclosure is heated by gas burners distributed arbitrarily, in a manner that is to be determined. The total contact area for heat transfer between the hot enclosure and the cold solid is fixed. We find that minimal global fuel consumption is achieved when the longitudinal distribution of heaters is nonuniform, with more heaters near the exit than near the entrance. The reduction in fuel consumption relative to when the heaters are distributed uniformly is of order 10%. Tapering the plan view (the floor) of the heating area yields an additional reduction in overall fuel consumption. The best shape is when the floor area is a slender triangle on which the cold solid enters by crossing the base. These architectural features support the proposal to organize the flow of the solid as a dendritic design, which enters as several branches and exits as a single hot stream of prescribed temperature. The thermodynamics of heating is presented in modern terms (exergy destruction, entropy generation). The contribution is to show that optimizing "thermodynamically" is the same as reducing the consumption of fuel. © 2010 American Institute of Physics.
Abstract:
We developed a high-throughput yeast-based assay to screen for chemical inhibitors of Ca(2+)/calmodulin-dependent kinase pathways. After screening two small libraries, we identified the novel antagonist 125-C9, a substituted ethyleneamine. In vitro kinase assays confirmed that 125-C9 inhibited several calmodulin-dependent kinases (CaMKs) competitively with Ca(2+)/calmodulin (Ca(2+)/CaM). This suggested that 125-C9 acted as an antagonist for Ca(2+)/CaM rather than for CaMKs. We confirmed this hypothesis by showing that 125-C9 binds directly to Ca(2+)/CaM using isothermal titration calorimetry. We further characterized binding of 125-C9 to Ca(2+)/CaM and compared its properties with those of two well-studied CaM antagonists: trifluoperazine (TFP) and W-13. Isothermal titration calorimetry revealed that binding of 125-C9 to CaM is absolutely Ca(2+)-dependent, likely occurs with a stoichiometry of five 125-C9 molecules to one CaM molecule, and involves an exchange of two protons at pH 7.0. Binding of 125-C9 is driven overall by entropy and appears to be competitive with TFP and W-13, which is consistent with occupation of similar binding sites. To test the effects of 125-C9 in living cells, we evaluated mitogen-stimulated re-entry of quiescent cells into proliferation and found similar, although slightly better, levels of inhibition by 125-C9 than by TFP and W-13. Our results not only define a novel Ca(2+)/CaM inhibitor but also reveal that chemically unique CaM antagonists can bind CaM by distinct mechanisms but similarly inhibit cellular actions of CaM.
Abstract:
A Fermi gas of atoms with resonant interactions is predicted to obey universal hydrodynamics, in which the shear viscosity and other transport coefficients are universal functions of the density and temperature. At low temperatures, the viscosity has a universal quantum scale ħn, where n is the density and ħ is Planck's constant h divided by 2π, whereas at high temperatures the natural scale is p_T^3/ħ^2, where p_T is the thermal momentum. We used breathing mode damping to measure the shear viscosity at low temperature. At high temperature T, we used anisotropic expansion of the cloud to find the viscosity, which exhibits precise T^(3/2) scaling. In both experiments, universal hydrodynamic equations including friction and heating were used to extract the viscosity. We estimate the ratio of the shear viscosity to the entropy density and compare it with that of a perfect fluid.
Abstract:
We study the problem of supervised linear dimensionality reduction, taking an information-theoretic viewpoint. The linear projection matrix is designed by maximizing the mutual information between the projected signal and the class label. By harnessing a recent theoretical result on the gradient of mutual information, the above optimization problem can be solved directly using gradient descent, without requiring simplification of the objective function. Theoretical analysis and empirical comparison are made between the proposed method and two closely related methods, and comparisons are also made with a method in which Rényi entropy is used to define the mutual information (in this case the gradient may be computed simply, under a special parameter setting). Relative to these alternative approaches, the proposed method achieves promising results on real datasets. Copyright 2012 by the author(s)/owner(s).
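The following is a hedged toy version of the idea, not the paper's method: whereas the paper uses an exact gradient of the mutual information, this sketch finds a single projection direction w by finite-difference ascent on a histogram plug-in estimate of I(w^T x; y). All function names and parameter values are illustrative.

```python
import numpy as np

def mi_estimate(z, y, bins=16):
    """Plug-in mutual information between a 1-D projection z and discrete labels y."""
    edges = np.histogram_bin_edges(z, bins=bins)
    zi = np.clip(np.digitize(z, edges[1:-1]), 0, bins - 1)
    classes = np.unique(y)
    joint = np.zeros((bins, classes.size))
    for j, c in enumerate(classes):
        joint[:, j] = np.bincount(zi[y == c], minlength=bins)
    joint /= joint.sum()
    pz, py = joint.sum(1, keepdims=True), joint.sum(0, keepdims=True)
    nz = joint > 0
    return float((joint[nz] * np.log(joint[nz] / (pz * py)[nz])).sum())

def fit_direction(X, y, steps=100, lr=0.5, eps=1e-2, seed=0):
    """Finite-difference ascent on the MI estimate over a unit-norm direction w."""
    w = np.random.default_rng(seed).normal(size=X.shape[1])
    w /= np.linalg.norm(w)
    for _ in range(steps):
        base = mi_estimate(X @ w, y)
        grad = np.array([(mi_estimate(X @ (w + eps * np.eye(len(w))[j]), y) - base) / eps
                         for j in range(len(w))])
        w = w + lr * grad
        w /= np.linalg.norm(w)
    return w

# Toy usage: two Gaussian classes separated along the first coordinate.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal([0, 0], 1, (200, 2)), rng.normal([3, 0], 1, (200, 2))])
y = np.repeat([0, 1], 200)
print(np.round(fit_direction(X, y), 2))  # should point mostly along the first axis
```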
Abstract:
X-ray crystallography is the predominant method for obtaining atomic-scale information about biological macromolecules. Despite the success of the technique, obtaining well diffracting crystals still critically limits going from protein to structure. In practice, the crystallization process proceeds through knowledge-informed empiricism. Better physico-chemical understanding remains elusive because of the large number of variables involved, hence little guidance is available to systematically identify solution conditions that promote crystallization. To help determine relationships between macromolecular properties and their crystallization propensity, we have trained statistical models on samples for 182 proteins supplied by the Northeast Structural Genomics consortium. Gaussian processes, which capture trends beyond the reach of linear statistical models, distinguish between two main physico-chemical mechanisms driving crystallization. One is characterized by low levels of side chain entropy and has been extensively reported in the literature. The other identifies specific electrostatic interactions not previously described in the crystallization context. Because evidence for two distinct mechanisms can be gleaned both from crystal contacts and from solution conditions leading to successful crystallization, the model offers future avenues for optimizing crystallization screens based on partial structural information. The availability of crystallization data coupled with structural outcomes analyzed through state-of-the-art statistical models may thus guide macromolecular crystallization toward a more rational basis.
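A minimal, hedged sketch of the statistical-modelling step described above, not the authors' actual model, features, or data: a Gaussian process classifier relating two hypothetical protein descriptors (mean side-chain entropy and net charge per residue) to a binary crystallization outcome on synthetic data.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 2))                          # synthetic: side-chain entropy, net charge
y = (X[:, 0] + 0.5 * X[:, 1] ** 2 < 0).astype(int)    # synthetic "crystallized" label

# An RBF-kernel GP captures nonlinear trends beyond the reach of linear models.
gp = GaussianProcessClassifier(kernel=1.0 * RBF(length_scale=1.0)).fit(X, y)
print(gp.predict_proba(np.array([[-1.0, 0.2]])))      # predicted crystallization probability
```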
Abstract:
Based on thermodynamic principles, we derive expressions quantifying the non-harmonic vibrational behavior of materials, which are rigorous yet easily evaluated from experimentally available data for the thermal expansion coefficient and the phonon density of states. These experimentally derived quantities are valuable to benchmark first-principles theoretical predictions of harmonic and non-harmonic thermal behaviors using perturbation theory, ab initio molecular dynamics, or Monte Carlo simulations. We illustrate this analysis by computing the harmonic, dilational, and anharmonic contributions to the entropy, internal energy, and free energy of elemental aluminum and the ordered compound FeSi over a wide range of temperature. Results agree well with previous data in the literature and provide an efficient approach to estimate anharmonic effects in materials.
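For reference, the standard harmonic vibrational entropy evaluated from the phonon density of states is sketched below; this is the textbook quasi-harmonic expression, given per atom with g(ω) normalized to unity, and is not necessarily the paper's exact decomposition of the dilational and anharmonic terms.

```latex
\[
  S_{\mathrm{harm}}(T) \;=\; 3 k_{\mathrm{B}} \int_{0}^{\infty}
     g(\omega)\,\bigl[(n_{\omega}+1)\ln(n_{\omega}+1) - n_{\omega}\ln n_{\omega}\bigr]\,\mathrm{d}\omega,
  \qquad
  n_{\omega} \;=\; \frac{1}{e^{\hbar\omega / k_{\mathrm{B}} T} - 1}.
\]
```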
Abstract:
Locked nucleic acids (LNA), conformationally restricted nucleotide analogues, are known to enhance pairing stability and selectivity toward complementary strands. With the aim of contributing to a better understanding of the origin of these effects, the structure, thermal stability, hybridization thermodynamics, and base-pair dynamics of a full-LNA:DNA heteroduplex and of its isosequential DNA:DNA homoduplex were monitored and compared. CD measurements highlight differences in the duplex structures: the homoduplex and heteroduplex present B-type and A-type helical conformations, respectively. The pairing of the hybrid duplex is characterized, at all temperatures monitored (between 15 and 37 degrees C), by a larger stability constant but a less favorable enthalpic term. A major contribution to this thermodynamic profile emanates from the presence of a hairpin structure in the LNA single strand, which contributes favorably to the entropy of interaction but leads to an enthalpy penalty upon duplex formation. The base-pair opening dynamics of both systems was monitored by NMR spectroscopy via imino proton exchange measurements. The measurements highlight that hybrid G-C base-pairs present a longer base-pair lifetime and higher stability than natural G-C base-pairs, but that an LNA substitution in an A-T base-pair does not have a favorable effect on the stability. The thermodynamic and dynamic data confirm a more favorable stacking of the bases in the hybrid duplex. This study emphasizes the complementarity between dynamic and thermodynamic studies for the elucidation of the relevant factors in binding events.