962 results for Homogeneous Distributions
Abstract:
Chow and Liu introduced an algorithm for fitting a multivariate distribution with a tree, i.e., a density model that assumes only pairwise dependencies between variables and that the graph of these dependencies is a spanning tree. The original algorithm is quadratic in the dimension of the domain and linear in the number of data points that define the target distribution $P$. This paper shows that for sparse, discrete data, fitting a tree distribution can be done in time and memory that are jointly subquadratic in the number of variables and the size of the data set. The new algorithm, called the acCL algorithm, takes advantage of the sparsity of the data to accelerate the computation of pairwise marginals and the sorting of the resulting mutual informations, achieving speedups of up to 2-3 orders of magnitude in the experiments.
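For context, the following is a minimal sketch of the classical (dense) Chow-Liu procedure that the abstract takes as its starting point, not of the accelerated acCL variant it describes: estimate the pairwise mutual informations from the data, then take a maximum-weight spanning tree over them. Function and variable names are illustrative.

```python
# Sketch of the classical Chow-Liu tree fit (not the acCL algorithm).
# Assumes discrete data in a NumPy array X of shape (n_samples, n_vars).
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

def mutual_information(xi, xj):
    """Empirical mutual information between two discrete columns."""
    n = len(xi)
    joint = {}
    for a, b in zip(xi, xj):
        joint[(a, b)] = joint.get((a, b), 0) + 1
    pi = {a: np.mean(xi == a) for a in set(xi)}
    pj = {b: np.mean(xj == b) for b in set(xj)}
    return sum((c / n) * np.log((c / n) / (pi[a] * pj[b]))
               for (a, b), c in joint.items())

def chow_liu_tree(X):
    n_vars = X.shape[1]
    mi = np.zeros((n_vars, n_vars))
    for i in range(n_vars):              # quadratic in the number of variables,
        for j in range(i + 1, n_vars):   # the cost the acCL algorithm attacks
            mi[i, j] = mutual_information(X[:, i], X[:, j])
    # Maximum-weight spanning tree = minimum spanning tree on negated weights.
    mst = minimum_spanning_tree(-mi)
    return list(zip(*mst.nonzero()))     # edges of the dependency tree
```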
Abstract:
A homogeneous DNA hybridization assay based on luminescence resonance energy transfer (LRET) from a new luminescent terbium chelate, N,N,N',N'-[2,6-bis(3'-aminomethyl-1'-pyrazolyl)-4-phenylpyridine]tetrakis(acetic acid) (BPTA)-Tb3+ (lambda(ex) = 325 nm and lambda(em) = 545 nm), to an organic dye, Cy3 (lambda(ex) = 548 nm and lambda(em) = 565 nm), has been developed. In this system, two DNA probes whose sequences are complementary to two different consecutive sequences of a target DNA are used; one probe is labeled with the Tb3+ chelate at the 3'-end, and the other with Cy3 at the 5'-end. Labeling with the Tb3+ chelate is accomplished via the linkage of a biotin-labeled DNA probe with the Tb3+ chelate-labeled streptavidin. Strong sensitized emission of Cy3 was observed upon excitation of the Tb3+ chelate at 325 nm when the two probe DNAs were hybridized with the target DNA. The sensitivity of the assay is much higher than those of previous homogeneous-format assays using conventional organic dyes; the detection limit of the present assay is about 30 pM of the target DNA strand.
Abstract:
There is a natural norm associated with a starting point of the homogeneous self-dual (HSD) embedding model for conic convex optimization. In this norm, two measures of the HSD model's behavior are precisely controlled independent of the problem instance: (i) the sizes of ε-optimal solutions, and (ii) the maximum distance of ε-optimal solutions to the boundary of the cone of the HSD variables. This norm is also useful in developing a stopping-rule theory for HSD-based interior-point methods such as SeDuMi. Under mild assumptions, we show that a standard stopping rule implicitly involves the sum of the sizes of the ε-optimal primal and dual solutions, as well as the size of the initial primal and dual infeasibility residuals. This theory suggests possible criteria for developing starting points for the homogeneous self-dual model that might improve the resulting solution time in practice.
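As background (not taken from the abstract itself), the simplified HSD embedding referred to above is usually written, in the linear-programming case, as the following self-dual system; the conic case replaces the nonnegativity constraints on x and s by membership in the cone and its dual:

```latex
% Simplified homogeneous self-dual embedding (LP case), standard form.
\begin{aligned}
  A x - b\,\tau &= 0,\\
  -A^{\mathsf{T}} y + c\,\tau - s &= 0,\\
  b^{\mathsf{T}} y - c^{\mathsf{T}} x - \kappa &= 0,\\
  x \ge 0,\quad s \ge 0,\quad \tau \ge 0,\quad \kappa \ge 0.
\end{aligned}
```

Any nonzero solution with τ > 0 yields an optimal primal-dual pair (x/τ, y/τ, s/τ), while τ = 0 with κ > 0 certifies primal or dual infeasibility, which is why the sizes of ε-optimal solutions and their distance to the cone boundary are the natural quantities for a stopping-rule theory.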
Abstract:
A sensitive homogeneous time-resolved fluoroimmunoassay (TR-FIA) method for bensulfuron-methyl (BSM), based on fluorescence resonance energy transfer (FRET) from a fluorescent Tb3+ chelate with N,N,N',N'-[2,6-bis(3'-aminomethyl-1'-pyrazolyl)-4-phenylpyridine]tetrakis(acetic acid) (BPTA-Tb3+) to an organic dye, Cy3 or Cy3.5, has been developed. The new method combines BPTA-Tb3+-labeled streptavidin, Cy3- or Cy3.5-labeled anti-BSM monoclonal antibody, and a biotinylated BSM-BSA conjugate (BSA is bovine serum albumin) in a competitive-type immunoassay. After the BPTA-Tb3+-labeled streptavidin is reacted with a competitive immune reaction solution containing the biotinylated BSM-BSA, the BSM sample, and the Cy3- or Cy3.5-labeled anti-BSM monoclonal antibody, the sensitized, long-lived emission of Cy3 or Cy3.5 arising from FRET is measured, from which the concentration of BSM in the sample is calculated. The present method has the advantages of rapidity, simplicity, and high sensitivity, since B/F (bound reagent/free reagent) separation steps and a solid-phase carrier are not necessary. The method gives a detection limit of 2.10 ng ml(-1). The coefficients of variation are less than 1.5%, and the recoveries are in the range of 95-105% for BSM measurements in water samples. (C) 2001 Elsevier Science B.V. All rights reserved.
Abstract:
Essery, R. L. H., & Pomeroy, J. W. (2004). Vegetation and topographic control of wind-blown snow distributions in distributed and aggregated simulations. Journal of Hydrometeorology, 5, 735-744.
Abstract:
Plakhov, A. Y., & Torres, D. (2005). Newton's aerodynamic problem in media of chaotically moving particles. Sbornik: Mathematics, 196(6), 885-933.
Abstract:
Huntley, B., Green, R. E., Collingham, Y. C., Hill, J. K., Willis, S. G., Bartlein, P. J., Cramer, W., Hagemeijer, W. J. M., & Thomas, C. J. (2004). The performance of models relating species geographical distributions to climate is independent of trophic level. Ecology Letters, 7(5), 417-426. Sponsorship: NERC (awards: GR9/3016, GR9/04270, GR3/12542, NER/F/S/2000/00166) / RSPB.
Abstract:
Recent studies have noted that vertex degree in the autonomous system (AS) graph exhibits a highly variable distribution [15, 22]. The most prominent explanatory model for this phenomenon is the Barabási-Albert (B-A) model [5, 2]. A central feature of the B-A model is preferential connectivity, meaning that the likelihood that a new node in a growing graph will connect to an existing node is proportional to the existing node's degree. In this paper we ask whether a more general explanation than the B-A model, one that does not assume preferential connectivity, is consistent with empirical data. We are motivated by two observations: first, AS degree and AS size are highly correlated [11]; and second, highly variable AS size can arise simply through exponential growth. We construct a model incorporating exponential growth in the size of the Internet and in the number of ASes. We then show via analysis that such a model yields a size distribution exhibiting a power-law tail. In such a model, if an AS's link formation is roughly proportional to its size, then AS degree will also show high variability. We instantiate such a model with empirically derived estimates of growth rates and show that the resulting degree distribution is in good agreement with that of real AS graphs.
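The mechanism behind "highly variable size from exponential growth alone" can be illustrated with a small simulation; this is a hedged sketch of the general idea, not the paper's exact model, and all parameter values are made up. If the number of ASes grows exponentially at rate lam, the age of a randomly chosen AS is approximately exponentially distributed with rate lam; if each AS's size also grows exponentially at rate g, then size = exp(g * age) has a Pareto (power-law) tail with exponent lam / g.

```python
# Illustrative simulation of exponential growth producing a power-law tail.
import numpy as np

rng = np.random.default_rng(0)
lam, g, n = 0.5, 0.4, 100_000      # AS birth rate, AS growth rate, sample size

ages = rng.exponential(scale=1.0 / lam, size=n)   # ages under exponential growth
sizes = np.exp(g * ages)                          # each AS grows exponentially

# The empirical CCDF P(size > s) is approximately linear on a log-log scale,
# with slope -lam/g, i.e. a power-law tail.
s = np.sort(sizes)
ccdf = 1.0 - np.arange(1, n + 1) / n
slope = np.polyfit(np.log(s[:-1]), np.log(ccdf[:-1]), 1)[0]
print(f"fitted tail exponent ~ {-slope:.2f}, predicted lam/g = {lam / g:.2f}")
```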
Abstract:
Fast forward error correction codes are becoming an important component in bulk content delivery. They fit naturally with multicast scenarios as a way to deal with losses and are now seeing use in peer-to-peer networks as a basis for distributing load. In particular, new irregular sparse parity-check codes have been developed with provable average linear-time performance, a significant improvement over previous codes. In this paper, we present a new heuristic for generating codes with similar performance, based on observing a server with an oracle for client state. This heuristic is easy to implement and provides further intuition into the need for an irregular, heavy-tailed distribution.
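As a hedged illustration of the kind of irregular, heavy-tailed distribution mentioned above: in related fountain/LT erasure codes, encoding-symbol degrees are commonly drawn from a soliton-type distribution. The sketch below samples degrees from the ideal soliton distribution; it is not the paper's oracle-based heuristic.

```python
# Sampling encoding-symbol degrees from the ideal soliton distribution.
import random

def ideal_soliton(k):
    """Ideal soliton probabilities rho[1..k] for k source symbols."""
    rho = [0.0] * (k + 1)
    rho[1] = 1.0 / k
    for d in range(2, k + 1):
        rho[d] = 1.0 / (d * (d - 1))
    return rho

def sample_degree(rho, rng=random):
    """Sample a degree d with probability rho[d]."""
    u, acc = rng.random(), 0.0
    for d in range(1, len(rho)):
        acc += rho[d]
        if u <= acc:
            return d
    return len(rho) - 1

rho = ideal_soliton(1000)
print([sample_degree(rho) for _ in range(10)])
# Mostly small degrees, with occasional very large ones: a heavy-tailed mix.
```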
Abstract:
A novel approach for real-time skin segmentation in video sequences is described. The approach enables reliable skin segmentation despite wide variation in illumination during tracking. An explicit second-order Markov model is used to predict the evolution of the skin color (HSV) histogram over time. Histograms are dynamically updated based on feedback from the current segmentation and on predictions of the Markov model. The evolution of the skin color distribution at each frame is parameterized by translation, scaling, and rotation in color space. Consequent changes in the geometric parameterization of the distribution are propagated by warping and re-sampling the histogram. The parameters of the discrete-time dynamic Markov model are estimated using maximum likelihood estimation, and also evolve over time. Quantitative evaluation of the method was conducted on labeled ground-truth video sequences taken from popular movies.
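A minimal sketch of the kind of dynamic histogram update described above, assuming a simple shift-only warp and a fixed blending weight (both illustrative, not the paper's exact model): the predicted histogram from the motion model is blended with the histogram observed from the current segmentation.

```python
# Hedged sketch: prediction-plus-feedback update of a skin-color histogram.
import numpy as np

def observed_histogram(pixels_hs, bins=(32, 32)):
    """Normalized 2-D hue/saturation histogram of pixels labeled skin this frame."""
    h, _, _ = np.histogram2d(pixels_hs[:, 0], pixels_hs[:, 1],
                             bins=bins, range=[[0, 1], [0, 1]])
    return h / max(h.sum(), 1.0)

def predict_histogram(prev_hist, shift_bins=(1, 0)):
    """Warp the previous histogram by a predicted translation in color space."""
    return np.roll(prev_hist, shift=shift_bins, axis=(0, 1))

def update_histogram(prev_hist, pixels_hs, alpha=0.6):
    pred = predict_histogram(prev_hist)        # prediction from the motion model
    obs = observed_histogram(pixels_hs)        # feedback from current segmentation
    new = alpha * pred + (1.0 - alpha) * obs
    return new / new.sum()                     # keep it a distribution
```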
Abstract:
The increasing practicality of large-scale flow capture makes it possible to conceive of traffic analysis methods that detect and identify a large and diverse set of anomalies. However, the challenge of effectively analyzing this massive data source for anomaly diagnosis is as yet unmet. We argue that the distributions of packet features (IP addresses and ports) observed in flow traces reveal both the presence and the structure of a wide range of anomalies. Using entropy as a summarization tool, we show that the analysis of feature distributions leads to significant advances on two fronts: (1) it enables highly sensitive detection of a wide range of anomalies, augmenting detections by volume-based methods, and (2) it enables automatic classification of anomalies via unsupervised learning. We show that using feature distributions, anomalies naturally fall into distinct and meaningful clusters. These clusters can be used to automatically classify anomalies and to uncover new anomaly types. We validate our claims on data from two backbone networks (Abilene and Geant) and conclude that feature distributions show promise as a key element of a fairly general network anomaly diagnosis framework.
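A minimal sketch of the entropy summarization described above: compute the empirical Shannon entropy of a packet-feature distribution, for example the destination ports seen in one measurement bin. The feature choice and example values are illustrative; the paper's exact feature set and normalization may differ.

```python
# Entropy of an empirical packet-feature distribution (e.g. destination ports).
import math
from collections import Counter

def feature_entropy(values):
    """Shannon entropy (bits) of the empirical distribution of `values`."""
    counts = Counter(values)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# A concentration of traffic on one port (e.g. a scan or DoS toward a single
# service) drives the port entropy down; dispersion across many ports drives it up.
normal_ports = [80, 443, 80, 53, 443, 80, 25, 80, 443, 8080]
attack_ports = [80] * 9 + [443]
print(feature_entropy(normal_ports), feature_entropy(attack_ports))
```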