938 results for Maximum Degree Proximity algorithm (MAX-DPA)


Relevance: 100.00%

Abstract:

Existing compact routing schemes, e.g., Thorup and Zwick [SPAA 2001] and Chechik [PODC 2013], often have no means to tolerate failures once the system has been set up and started. This paper presents, to our knowledge, the first self-healing compact routing scheme. Moreover, our schemes are developed for low-memory nodes, i.e., nodes need only O(log^2 n) memory, and are thus compact schemes.
We introduce two algorithms of independent interest. The first is CompactFT, a novel compact version (using only O(log n) local memory) of the self-healing algorithm Forgiving Tree of Hayes et al. [PODC 2008]. The second algorithm (CompactFTZ) combines CompactFT with Thorup-Zwick's tree-based compact routing scheme [SPAA 2001] to produce a fully compact self-healing routing scheme. In the self-healing model, the adversary deletes nodes one at a time, and the affected nodes self-heal locally by adding a few edges. CompactFT recovers from each attack in only O(1) time and ∆ messages, with only a +3 degree increase and an O(log ∆) graph diameter increase over any sequence of deletions (∆ is the initial maximum degree).
Additionally, CompactFTZ guarantees delivery of a packet sent from a sender s to a receiver t as long as t has not been deleted, with only an additional O(y log ∆) latency, where y is the number of nodes that have been deleted on the path between s and t. If t has been deleted, s is informed and the packet is removed from the network.
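
As a rough illustration of the self-healing principle described above (not the paper's CompactFT data structures), the sketch below deletes a node and wires its surviving neighbours into a balanced binary tree, so connectivity is preserved, each neighbour gains at most a constant number of edges, and detours grow only logarithmically in the deleted node's degree; the use of networkx and the helper name heal_deletion are illustrative choices, not from the paper.

```python
# Toy illustration of Forgiving-Tree-style self-healing: when a node v is
# deleted, its surviving neighbours are connected among themselves by a
# balanced binary tree, keeping the graph connected while each neighbour
# gains at most a constant number of new edges.
import networkx as nx

def heal_deletion(g: nx.Graph, v) -> None:
    """Delete v and reconnect its neighbours with a balanced binary tree."""
    neighbours = sorted(g.neighbors(v))   # deterministic order for the demo
    g.remove_node(v)
    for i, u in enumerate(neighbours):
        # children of position i in an array-encoded binary tree
        for child in (2 * i + 1, 2 * i + 2):
            if child < len(neighbours):
                g.add_edge(u, neighbours[child])

if __name__ == "__main__":
    g = nx.star_graph(8)                   # node 0 has degree 8
    heal_deletion(g, 0)
    assert nx.is_connected(g)
    print(max(dict(g.degree()).values()))  # each survivor gains at most 3 edges
```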

Relevance: 100.00%

Abstract:

The medical field requires fast, simple and noninvasive diagnostic techniques. Several such methods are now possible thanks to advances in technology that provide the necessary means of collecting and processing signals. The present thesis details work done in the field of voice signals. New methods of analysis, such as nonlinear dynamics, have been developed to understand the complexity of voice signals and to explore their dynamic nature. The purpose of this thesis is to characterize the complexity of pathological voice signals relative to healthy signals and to differentiate stuttering signals from healthy signals. The efficiency of various acoustic as well as nonlinear time-series methods is analysed. Three groups of samples are used: healthy individuals, subjects with vocal pathologies, and stuttering subjects. Individual vowels and continuous speech data for the utterance of the Malayalam sentence "iruvarum changatimaranu" ("Both are good friends" in English) are recorded using a microphone. The recorded audio is converted to digital signals and subjected to analysis. Acoustic perturbation measures such as fundamental frequency (F0), jitter, shimmer and zero crossing rate (ZCR) were computed, and nonlinear measures such as the maximum Lyapunov exponent (lambda_max), correlation dimension (D2), Kolmogorov exponent (K2), and a new measure of entropy, viz. permutation entropy (PE), were evaluated for all three groups of subjects. Permutation entropy is a nonlinear complexity measure that can efficiently distinguish the regular and complex nature of a signal and extract information about changes in the dynamics of the process by indicating sudden changes in its value. The results show that nonlinear dynamical methods are a suitable technique for voice signal analysis, due to the chaotic component of the human voice. Permutation entropy is well suited because of its sensitivity to uncertainties, since the pathologies are characterized by an increase in signal complexity and unpredictability. Pathological groups have higher entropy values compared to the normal group, while the stuttering signals have lower entropy values compared to the normal signals. PE is effective in characterising the level of improvement after two weeks of speech therapy in stuttering subjects, and in characterizing the dynamical difference between healthy and pathological subjects. This suggests that PE can improve and complement the voice analysis methods currently available to clinicians. The work establishes the application of the simple, inexpensive and fast PE algorithm for diagnosis in vocal disorders and stuttering.
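
Since permutation entropy is central to this abstract, here is a minimal sketch of the Bandt-Pompe estimator; the order, delay and test signal below are illustrative, not the settings used in the thesis.

```python
# Permutation entropy (Bandt-Pompe): histogram of ordinal patterns of length
# `order`, then Shannon entropy of that histogram (optionally normalized).
import math
from collections import Counter

def permutation_entropy(x, order=3, delay=1, normalize=True):
    """Permutation entropy of a 1-D sequence x."""
    patterns = Counter()
    for i in range(len(x) - (order - 1) * delay):
        window = x[i : i + order * delay : delay]
        # ordinal pattern = the ranks of the samples inside the window
        patterns[tuple(sorted(range(order), key=window.__getitem__))] += 1
    total = sum(patterns.values())
    pe = -sum((c / total) * math.log2(c / total) for c in patterns.values())
    return pe / math.log2(math.factorial(order)) if normalize else pe

# A regular signal gives low PE; white noise gives values close to 1 (normalized).
print(permutation_entropy([math.sin(0.1 * i) for i in range(1000)]))
```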

Relevance: 100.00%

Abstract:

For a fixed family F of graphs, an F-packing in a graph G is a set of pairwise vertex-disjoint subgraphs of G, each isomorphic to an element of F. Finding an F-packing that maximizes the number of covered edges is a natural generalization of the maximum matching problem, which is just F = {K_2}. In this paper we provide new approximation algorithms and hardness results for the K_r-packing problem, where K_r = {K_2, K_3, ..., K_r}. We show that already for r = 3 the K_r-packing problem is APX-complete, and, in fact, we show that it remains so even for graphs with maximum degree 4. On the positive side, we give an approximation algorithm with approximation ratio at most 2 for every fixed r. For r = 3, 4, 5 we obtain better approximations. For r = 3 we obtain a simple 3/2-approximation, achieving a known ratio that follows from a more involved algorithm of Halldorsson. For r = 4, we obtain a (3/2 + epsilon)-approximation, and for r = 5 we obtain a (25/14 + epsilon)-approximation. (C) 2008 Elsevier B.V. All rights reserved.
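
For intuition about {K_2, K_3}-packing (the r = 3 case), the following is a simple greedy heuristic, not the 3/2-approximation analysed in the paper: take vertex-disjoint triangles while any remain, then add a maximum matching on the leftover vertices. The use of networkx and the name greedy_k3_packing are illustrative choices.

```python
# Greedy {K_2, K_3}-packing heuristic: triangles first, then a matching.
import networkx as nx

def greedy_k3_packing(g: nx.Graph):
    g = g.copy()
    packing = []
    found = True
    while found:                      # take vertex-disjoint triangles greedily
        found = False
        for u, v in list(g.edges()):
            common = set(g[u]) & set(g[v])
            if common:
                w = common.pop()
                packing.append((u, v, w))
                g.remove_nodes_from((u, v, w))
                found = True
                break
    # cover what is left with a maximum matching (each K_2 covers one edge)
    packing.extend(nx.max_weight_matching(g, maxcardinality=True))
    return packing

print(greedy_k3_packing(nx.complete_graph(5)))
```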

Relevance: 100.00%

Abstract:

In 1983, Chvatal, Trotter and the two senior authors proved that for any Delta there exists a constant B such that, for any n, any 2-colouring of the edges of the complete graph K_N with N >= Bn vertices yields a monochromatic copy of any graph H that has n vertices and maximum degree Delta. We prove that the complete graph may be replaced by a sparser graph G that has N vertices and O(N^{2-1/Delta} log^{1/Delta} N) edges, with N = ⌈B'n⌉ for some constant B' that depends only on Delta. Consequently, the so-called size-Ramsey number of any H with n vertices and maximum degree Delta is O(n^{2-1/Delta} log^{1/Delta} n). Our approach is based on random graphs; in fact, we show that the classical Erdos-Renyi random graph with the numerical parameters above satisfies a stronger partition property with high probability, namely, that any 2-colouring of its edges contains a monochromatic universal graph for the class of graphs on n vertices and maximum degree Delta. The main tool in our proof is the regularity method, adapted to a suitable sparse setting. The novel ingredient developed here is an embedding strategy that allows one to embed bounded-degree graphs of linear order in certain pseudorandom graphs. Crucial to our proof is the fact that regularity is typically inherited at a scale that is much finer than the scale at which it is assumed. (C) 2011 Elsevier Inc. All rights reserved.
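
Restated in display form (notation here, including \hat{r}, is mine rather than the authors'), the sparse host-graph density and the resulting size-Ramsey bound read:

```latex
\[
  e(G) = O\!\left(N^{\,2-1/\Delta} \log^{1/\Delta} N\right), \qquad N = \lceil B'n \rceil,
\]
\[
  \hat{r}(H) = O\!\left(n^{\,2-1/\Delta} \log^{1/\Delta} n\right)
  \quad \text{for every } H \text{ with } n \text{ vertices and } \Delta(H) \le \Delta .
\]
```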

Relevance: 100.00%

Abstract:

Milk, fat, and protein yields of Holstein cows from the States of New York and California in the United States were used to estimate (co)variances among yields in the first three lactations, using an animal model and a derivative-free restricted maximum likelihood (REML) algorithm, and to verify if yields in different lactations are the same trait. The data were split into 20 samples, 10 from each state, with means of 5463 and 5543 cows per sample from California and New York. Mean heritability estimates for milk, fat, and protein yields for California data were, respectively, 0.34, 0.35, and 0.40 for first; 0.31, 0.33, and 0.39 for second; and 0.28, 0.31, and 0.37 for third lactations. For New York data, estimates were 0.35, 0.40, and 0.34 for first; 0.34, 0.44, and 0.38 for second; and 0.32, 0.43, and 0.38 for third lactations. Means of estimates of genetic correlations between first and second, first and third, and second and third lactations for California data were 0.86, 0.77, and 0.96 for milk; 0.89, 0.84, and 0.97 for fat; and 0.90, 0.84, and 0.97 for protein yields. Mean estimates for New York data were 0.87, 0.81, and 0.97 for milk; 0.91, 0.86, and 0.98 for fat; and 0.88, 0.82, and 0.98 for protein yields. Environmental correlations varied from 0.30 to 0.50 and were larger between second and third lactations. Phenotypic correlations were similar for both states and varied from 0.52 to 0.66 for milk, fat and protein yields. These estimates are consistent with previous estimates obtained with animal models. Yields in different lactations are not statistically the same trait, but for selection programs such yields can be modelled as the same trait because of the high genetic correlations.
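
To make the reported quantities concrete, the sketch below shows how heritability and genetic correlation follow from REML (co)variance components; the numbers are made up for illustration and are not the study's estimates.

```python
# Heritability and genetic correlation from (co)variance components.
import math

def heritability(var_additive: float, var_residual: float) -> float:
    """h^2 = additive genetic variance over total phenotypic variance."""
    return var_additive / (var_additive + var_residual)

def genetic_correlation(cov_a12: float, var_a1: float, var_a2: float) -> float:
    """r_g = additive covariance between traits over the product of their SDs."""
    return cov_a12 / math.sqrt(var_a1 * var_a2)

# e.g. hypothetical components sigma_a^2 = 3.4, sigma_e^2 = 6.6 give h^2 = 0.34
print(heritability(3.4, 6.6), genetic_correlation(2.9, 3.4, 3.1))
```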

Relevance: 100.00%

Abstract:

We prove that asymptotically (as n -> infinity) almost all graphs with n vertices and C_d n^{2-1/(2d)} log^{1/d} n edges are universal with respect to the family of all graphs with maximum degree bounded by d. Moreover, we provide an efficient deterministic embedding algorithm for finding copies of bounded-degree graphs in graphs satisfying certain pseudorandom properties. We also prove a counterpart result for random bipartite graphs, where the threshold number of edges is even smaller but the embedding is randomized.
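
In display form (the constant's name C_d is my notation), the edge count at which almost every graph becomes universal for the graphs of maximum degree d is:

```latex
\[
  m \;=\; C_d\, n^{\,2 - \frac{1}{2d}} \log^{\frac{1}{d}} n .
\]
```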

Relevance: 100.00%

Abstract:

The anisotropy of magnetic susceptibility documents the generation of tectonically produced fabrics in sediments that macroscopically show no evidence of this disruption. The fabric observed in initial accretion is largely produced by overprinting of the original sedimentary susceptibility anisotropy by an E-W horizontal tectonic shortening and vertical extension. The response of the sediments to stress during initial accretion is variable, particularly near the sediment surface, and appears to reflect the inhomogeneous distribution of strain rate in the overthrust sequence. The susceptibility anisotropy of sediments possessing scaly fabric is consistent with the strong orientation of phyllosilicates seen in thin section, producing a K_min normal to the scaliness. The slope sediments deposited on the accreted sequence are also affected by tectonic shortening. The accreted sequences at Sites 673 and 674 show a complex history of fabric modification, with earlier tectonic fabrics overprinted by later fabric modifications, pointing to continued tectonic shortening during the accretion process. The form of the susceptibility anisotropy axes at Sites 673 and 674 is consistent with NE-SW shortening, probably reflected in the NW-SE surface expression of the out-of-sequence thrusts. The susceptibility anisotropy appears to document a downhole change in the trend of shortening, from E-W at the surface to more NE-SW at depth, probably as a result of the obliquely trending basement ridge, the Tiburon Rise.

Relevance: 100.00%

Abstract:

This thesis addresses on-road vehicle detection and tracking with a monocular vision system. This problem has attracted the attention of the automotive industry and the research community, as it is the first step towards driver assistance and collision-avoidance systems and, ultimately, autonomous driving. Although much effort has been devoted to it in recent years, no fully satisfactory solution has yet been devised, and it therefore remains an active research issue. The main challenges for vision-based vehicle detection and tracking are the high variability among vehicles, the dynamically changing background due to camera motion, and the real-time processing requirement. This thesis presents a unified statistical approach to vehicle detection and tracking that tackles these issues. The approach is divided into three primary tasks, i.e., vehicle hypothesis generation, hypothesis verification, and vehicle tracking, which are performed sequentially. Nevertheless, the exchange of information between processing blocks is fostered so that the maximum degree of adaptation to changes in the environment is achieved and the computational cost is reduced. Two complementary strategies are proposed to address the first task, hypothesis generation, based respectively on appearance and geometry analysis. To this end, the use of a rectified domain in which the perspective is removed from the original image is especially interesting, as it allows for fast image scanning and efficient hypothesis generation; the final vehicle candidates are produced using a collaborative framework between the original and the rectified domains. A supervised classification strategy is adopted for the verification of the hypothesized vehicle locations. In particular, state-of-the-art feature extraction methods are evaluated and new descriptors are proposed by exploiting knowledge of vehicle appearance. Due to the lack of suitable public databases, a new database is generated, and the classification performance of the descriptors is extensively tested on it. Finally, a methodology for the fusion of the different classifiers is presented and the best-performing combinations are discussed. The core of the proposed approach is a Bayesian tracking framework based on particle filters. Contributions are made to its three key elements: the inference algorithm, the dynamic model, and the observation model. In particular, a Markov chain Monte Carlo (MCMC) sampling method is proposed, which avoids the high computational cost of traditional particle filters and thus makes joint tracking of multiple vehicles computationally affordable. In addition, the aforementioned rectified domain allows for the definition of a constant-velocity dynamic model, since it preserves the smooth motion of vehicles on highways. Finally, a multiple-cue observation model is proposed that accounts not only for vehicle appearance but also integrates the information received from the previous processing blocks. The proposed approach runs in real time on a general-purpose computer and delivers outstanding results compared to traditional methods.
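
To make the tracking stage concrete, here is a hedged sketch of a particle filter with a constant-velocity dynamic model in a rectified (bird's-eye) domain; the Gaussian observation likelihood and all parameters are stand-ins rather than the thesis's multiple-cue model or its MCMC sampler.

```python
# Minimal particle-filter sketch: constant-velocity prediction plus a
# Gaussian re-weighting step against noisy (x, y) detections.
import numpy as np

rng = np.random.default_rng(0)

def predict(particles, dt=0.1, accel_noise=0.5):
    """Constant-velocity motion; each particle state is [x, y, vx, vy]."""
    particles[:, :2] += particles[:, 2:] * dt
    particles[:, 2:] += rng.normal(0.0, accel_noise, particles[:, 2:].shape)
    return particles

def update(particles, weights, measurement, meas_std=1.0):
    """Re-weight particles by a Gaussian likelihood of an (x, y) detection."""
    d2 = np.sum((particles[:, :2] - measurement) ** 2, axis=1)
    weights *= np.exp(-0.5 * d2 / meas_std**2)
    return weights / weights.sum()

# toy run: a vehicle moving at constant velocity, observed with noise
particles = rng.normal([0, 0, 10, 0], [1, 1, 2, 2], size=(500, 4))
weights = np.full(500, 1 / 500)
for t in range(1, 20):
    particles = predict(particles)
    z = np.array([10 * 0.1 * t, 0]) + rng.normal(0, 1.0, 2)
    weights = update(particles, weights, z)
print(np.average(particles[:, :2], axis=0, weights=weights))  # estimated position
```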

Relevance: 100.00%

Abstract:

With the development of electronic devices, more and more mobile clients are connected to the Internet, and they generate massive amounts of data every day. We live in an age of "Big Data", in which hundreds of millions of data records are generated every day. By analyzing these data and making predictions, better development plans can be made. Unfortunately, traditional computation frameworks cannot meet this demand, which is why Hadoop was put forward. First, the paper introduces the background and development status of Hadoop, compares MapReduce in Hadoop 1.0 with YARN in Hadoop 2.0, and analyzes their advantages and disadvantages. Because resource management is the core role of YARN, the paper then studies the resource allocation module, including resource management, the resource allocation algorithm, the resource preemption model, and the whole resource scheduling process from requesting resources to completing the allocation. It also introduces and compares the FIFO Scheduler, the Capacity Scheduler, and the Fair Scheduler. The main work of this paper is researching and analyzing the Dominant Resource Fairness (DRF) algorithm of YARN and putting forward a maximum-resource-utilization algorithm based on it. The paper also suggests improvements to unreasonable aspects of the resource preemption model. Emphasizing "fairness" during resource allocation is the core concept of the DRF algorithm of YARN. Because the cluster serves multiple users and multiple resource types, each user's resource request is multi-dimensional as well. DRF divides a user's requested resources into the dominant resource and normal resources: for a user, the dominant resource is the one whose share is highest among all requested resources, and the others are normal resources. DRF requires the dominant resource shares of all users to be equal. But in cases where different users' dominant resource amounts differ greatly, emphasizing "fairness" is not suitable and cannot improve the resource utilization of the cluster. By analyzing these cases, this thesis puts forward a new allocation algorithm based on DRF. The new algorithm still takes fairness into consideration, but it is no longer the main principle; maximizing resource utilization is the main principle and goal of the new algorithm. Comparing the results of DRF and the new DRF-based algorithm shows that the new algorithm achieves higher resource utilization than DRF. The last part of the thesis sets up a YARN environment and uses the Scheduler Load Simulator (SLS) to simulate the cluster environment.
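
For reference, the textbook DRF allocation loop that the proposed algorithm builds on works as sketched below; the capacities and per-task demands are illustrative, and drf_allocate is a stand-in helper, not YARN code.

```python
# Textbook Dominant Resource Fairness: repeatedly give the next task to the
# user with the smallest dominant share, until no more tasks fit.
def drf_allocate(capacity, demands, steps=100):
    """capacity: resource -> amount; demands: one per-task demand dict per user."""
    used = {r: 0.0 for r in capacity}
    alloc = [dict.fromkeys(capacity, 0.0) for _ in demands]
    dominant_share = [0.0] * len(demands)
    for _ in range(steps):
        # the DRF rule: pick the user with the smallest dominant share
        u = min(range(len(demands)), key=lambda i: dominant_share[i])
        d = demands[u]
        if any(used[r] + d[r] > capacity[r] for r in capacity):
            break  # next task of that user does not fit: stop (simplified)
        for r in capacity:
            used[r] += d[r]
            alloc[u][r] += d[r]
        dominant_share[u] = max(alloc[u][r] / capacity[r] for r in capacity)
    return alloc

# Example: 9 CPUs / 18 GB; user A tasks need <1 CPU, 4 GB>, user B tasks <3 CPU, 1 GB>.
print(drf_allocate({"cpu": 9, "mem": 18},
                   [{"cpu": 1, "mem": 4}, {"cpu": 3, "mem": 1}]))
```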

Relevance: 100.00%

Abstract:

Objectives: Because there is scientific evidence that an appropriate intake of dietary fibre should be part of a healthy diet, given its importance in promoting health, the present study aimed to develop and validate an instrument to evaluate the knowledge of the general population about dietary fibre. Study design: The present study was a cross-sectional study. Methods: The methodological study of psychometric validation was conducted with 6010 participants residing in ten countries across three continents. The instrument is a self-report questionnaire aimed at collecting information on knowledge about dietary fibre. For exploratory factor analysis (EFA), principal component analysis with varimax orthogonal rotation and eigenvalues greater than 1 was chosen. In confirmatory factor analysis by structural equation modelling (SEM), the covariance matrix was considered and the maximum likelihood estimation algorithm was adopted for parameter estimation. Results: Exploratory factor analysis retained two factors. The first was called Dietary Fibre and Promotion of Health (DFPH) and included 7 questions that explained 33.94% of the total variance (alpha = 0.852). The second was named Sources of Dietary Fibre (SDF) and included 4 questions that explained 22.46% of the total variance (alpha = 0.786). The model was tested by SEM, giving a final solution with four questions in each factor. This model showed a very good fit in practically all the indexes considered, except for the chi-square/df ratio. The values of average variance extracted (0.458 and 0.483) demonstrate the existence of convergent validity; the results also prove the existence of discriminant validity of the factors (r2 = 0.028); and finally, good internal consistency was confirmed by the values of composite reliability (0.854 and 0.787). Conclusions: This study allowed the validation of the KADF scale, increasing the degree of confidence in the information obtained through this instrument in this and in future studies.
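
The reliability and validity statistics quoted above come from standard formulas over standardized factor loadings; the sketch below shows those formulas with made-up loadings, not the study's data.

```python
# Average variance extracted (AVE) and composite reliability (CR) from
# standardized factor loadings of a single factor.
def average_variance_extracted(loadings):
    """AVE = mean of the squared standardized loadings."""
    return sum(l**2 for l in loadings) / len(loadings)

def composite_reliability(loadings):
    """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)."""
    s = sum(loadings)
    error = sum(1 - l**2 for l in loadings)
    return s**2 / (s**2 + error)

loadings = [0.71, 0.68, 0.65, 0.66]   # hypothetical 4-item factor
print(average_variance_extracted(loadings), composite_reliability(loadings))
```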

Relevance: 100.00%

Abstract:

We present a new algorithm to compute the voxel-wise genetic contribution to brain fiber microstructure using diffusion tensor imaging (DTI) in a dataset of 25 monozygotic (MZ) and 25 dizygotic (DZ) twin pairs (100 subjects in total). First, the structural and DT scans were linearly co-registered. Structural MR scans were nonlinearly mapped via a 3D fluid transformation to a geometrically centered mean template, and the deformation fields were applied to the DTI volumes. After tensor re-orientation to realign them to the anatomy, we computed several scalar and multivariate DT-derived measures, including the geodesic anisotropy (GA), the tensor eigenvalues and the full diffusion tensors. A covariance-weighted distance was measured between twins in the Log-Euclidean framework [2] and used as input to a maximum-likelihood-based algorithm to compute the contributions from genetics (A), common environmental factors (C) and unique environmental factors (E) to fiber architecture. Quantitative genetic studies can take advantage of the full information in the diffusion tensor, using covariance-weighted distances and statistics on the tensor manifold.
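
As a small illustration of the Log-Euclidean framework mentioned above (not the paper's covariance-weighted distance), the sketch below computes the plain Log-Euclidean distance between two synthetic diffusion tensors.

```python
# Log-Euclidean distance between symmetric positive-definite (SPD) tensors:
# Frobenius norm of the difference of their matrix logarithms.
import numpy as np

def spd_log(t: np.ndarray) -> np.ndarray:
    """Matrix logarithm of an SPD tensor via eigendecomposition."""
    w, v = np.linalg.eigh(t)
    return (v * np.log(w)) @ v.T

def log_euclidean_distance(t1: np.ndarray, t2: np.ndarray) -> float:
    return float(np.linalg.norm(spd_log(t1) - spd_log(t2)))

d1 = np.diag([1.7e-3, 0.4e-3, 0.3e-3])  # prolate, "fiber-like" tensor (mm^2/s)
d2 = np.diag([1.0e-3, 0.9e-3, 0.8e-3])  # nearly isotropic tensor
print(log_euclidean_distance(d1, d2))
```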

Relevance: 100.00%

Abstract:

A k-cube (or "a unit cube in k dimensions") is defined as the Cartesian product R_1 x ... x R_k, where R_i (for 1 <= i <= k) is an interval of the form [a_i, a_i + 1] on the real line. The k-cube representation of a graph G is a mapping of the vertices of G to k-cubes such that the k-cubes corresponding to two vertices in G have a non-empty intersection if and only if the vertices are adjacent. The cubicity of a graph G, denoted cub(G), is defined as the minimum dimension k such that G has a k-cube representation. An interval graph is a graph that can be represented as the intersection of intervals on the real line, i.e., the vertices of an interval graph can be mapped to intervals on the real line such that two vertices are adjacent if and only if their corresponding intervals overlap. We show that for any interval graph G with maximum degree Delta, cub(G) <= ⌈log_2 Delta⌉ + 4. This upper bound is shown to be tight up to an additive constant of 4 by demonstrating interval graphs for which the cubicity equals ⌈log_2 Delta⌉.
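
Restated in display form (my notation, not the authors'):

```latex
\[
  \operatorname{cub}(G) \;\le\; \lceil \log_2 \Delta \rceil + 4
  \quad\text{for every interval graph } G \text{ with maximum degree } \Delta,
\]
% and there exist interval graphs with cub(G) = \lceil \log_2 \Delta \rceil,
% so the bound is tight up to the additive constant 4.
```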

Relevance: 100.00%

Abstract:

An acyclic edge coloring of a graph is a proper edge coloring such that there are no bichromatic (2-colored) cycles. The acyclic chromatic index of a graph is the minimum number k such that there is an acyclic edge coloring using k colors, and is denoted by a'(G). Let Delta = Delta(G) denote the maximum degree of a vertex in a graph G. A complete bipartite graph with n vertices on each side is denoted by K_{n,n}. Alon, McDiarmid and Reed observed that a'(K_{p-1,p-1}) = p for every prime p. In this paper we prove that a'(K_{p,p}) <= p + 2 = Delta + 2 when p is prime. Basavaraju, Chandran and Kummini proved that a'(K_{n,n}) >= n + 2 = Delta + 2 when n is odd, which combined with our result implies that a'(K_{p,p}) = p + 2 = Delta + 2 when p is an odd prime. Moreover, we show that if we remove any edge from K_{p,p}, the resulting graph is acyclically edge colorable using Delta + 1 = p + 1 colors. (C) 2009 Elsevier B.V. All rights reserved.

Relevance: 100.00%

Abstract:

An acyclic edge coloring of a graph is a proper edge coloring such that there are no bichromatic cycles. The acyclic chromatic index of a graph is the minimum number k such that there is an acyclic edge coloring using k colors and it is denoted by a′(G). From a result of Burnstein it follows that all subcubic graphs are acyclically edge colorable using five colors. This result is tight since there are 3-regular graphs which require five colors. In this paper we prove that any non-regular connected graph of maximum degree 3 is acyclically edge colorable using at most four colors. This result is tight since all edge maximal non-regular connected graphs of maximum degree 3 require four colors.

Relevance: 100.00%

Abstract:

An acyclic edge coloring of a graph is a proper edge coloring such that there are no bichromatic cycles. The acyclic chromatic index of a graph is the minimum number k such that there is an acyclic edge coloring using k colors, and is denoted by a'(G). It was conjectured by Alon, Sudakov and Zaks (and earlier by Fiamcik) that a'(G) <= Delta + 2, where Delta = Delta(G) denotes the maximum degree of the graph. Alon et al. also raised the question whether the complete graphs of even order are the only regular graphs which require Delta + 2 colors to be acyclically edge colored. In this article, using a simple counting argument, we observe not only that this is not true, but in fact that all d-regular graphs with 2n vertices and d > n require at least d + 2 colors. We also show that a'(K_{n,n}) >= n + 2 when n is odd, using a more non-trivial argument. (Here K_{n,n} denotes the complete bipartite graph with n vertices on each side.) This lower bound for K_{n,n} can be shown to be tight for some families of complete bipartite graphs and for small values of n. We also infer that for every d, n such that d >= 5, n >= 2d + 3 and dn is even, there exist d-regular graphs which require at least d + 2 colors to be acyclically edge colored. (C) 2009 Wiley Periodicals, Inc. J Graph Theory 63: 226-230, 2010.
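
In display form (my notation, not the authors'), the conjecture and the bounds discussed above are:

```latex
\[
  \text{Conjecture (Fiamcik; Alon, Sudakov, Zaks):}\qquad a'(G) \;\le\; \Delta(G) + 2 ,
\]
\[
  a'(G) \ge d + 2 \ \text{ for every $d$-regular $G$ on $2n$ vertices with } d > n,
  \qquad a'(K_{n,n}) \ge n + 2 \ \text{ for odd } n .
\]
```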