964 results for S-antipodal graphs


Relevance:

10.00%

Publisher:

Abstract:

We report weaknesses in two algebraic constructions of low-density parity-check codes based on expander graphs. The Margulis construction gives a code with near-codewords, which cause problems for the sum-product decoder; the Ramanujan-Margulis construction gives a code with low-weight codewords, which produce an error floor. ©2003 Published by Elsevier Science B.V.
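The failure mode described here can be made concrete on a toy code. The sketch below (using a small, hypothetical parity-check matrix, not the Margulis or Ramanujan-Margulis constructions) finds the minimum-weight nonzero codeword by exhaustive search; it is the smallness of this weight that drives the error floor:

```python
import itertools
import numpy as np

# Toy parity-check matrix (NOT the Margulis construction); rows are parity checks.
H = np.array([
    [1, 1, 0, 1, 0, 0],
    [0, 1, 1, 0, 1, 0],
    [1, 0, 1, 0, 0, 1],
], dtype=np.uint8)

def min_weight_codeword(H):
    """Exhaustively find a nonzero codeword of minimum Hamming weight."""
    n = H.shape[1]
    best = None
    for bits in itertools.product([0, 1], repeat=n):
        c = np.array(bits, dtype=np.uint8)
        # c is a codeword iff every parity check is satisfied: H c = 0 (mod 2)
        if c.any() and not (H @ c % 2).any():
            if best is None or c.sum() < best.sum():
                best = c
    return best

c = min_weight_codeword(H)
print("minimum-weight codeword:", c, "weight:", int(c.sum()))
```

Exhaustive search is only feasible for tiny codes; for real LDPC codes, low-weight codewords are found with impulse-based or decoder-assisted searches.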

This manual describes the interactive handling of graphs in the 3D environment provided by the Xglore program (http://sourceforge.net/projects/xglore/). It is part of the project “Nerthusv2: A 3D lexical database of Old English”, funded by the Ministry of Science and Innovation (grant no. FFI08-04448/FILO).

This report summarizes municipal use of water in 138 selected municipalities in Florida as of December 1970 and includes the following: 1) tabulation of water-use data for each listed municipality; 2) tabulation of chemical analyses of water for each listed municipality; and 3) graphs of pumpage, included when available. Also included are selected recent references relating to the geology, hydrology, and water resources of the areas in which the municipalities are located. (218-page document)

Email addresses of the authors: Edurne Ortiz de Elguea (txintxe1989@holmail.com) and Priscila García (pelukina06@hotmail.com).

The study of complex networks has attracted the attention of the scientific community for many obvious reasons. A vast number of systems, from the brain to ecosystems, power grids, and the Internet, can be represented as large complex networks, i.e., assemblies of many interacting components with nontrivial topological properties. The links between these components can describe global behaviour such as Internet traffic, electricity supply service, market trends, etc. One of the most relevant topological features of the graphs representing these complex systems is community structure: the aim of community detection is to identify the modules and, possibly, their hierarchical organization, using only the information encoded in the graph topology. Deciphering network community structure is important not only for characterizing the graph topologically, but also because it gives information both on the formation of the network and on its functionality.
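As a minimal illustration of what community structure means quantitatively, the sketch below (assuming the standard Newman-Girvan modularity, one common formalization and not necessarily the one used in this work) scores two partitions of a graph built from two triangles joined by a single bridge:

```python
import numpy as np

# Two 3-cliques joined by a single bridge edge (2-3).
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
n = 6
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1

def modularity(A, labels):
    """Newman-Girvan modularity Q of a node partition:
    Q = (1/2m) * sum_ij [A_ij - k_i k_j / 2m] * [c_i == c_j]."""
    k = A.sum(axis=1)          # node degrees
    two_m = A.sum()            # 2 * number of edges
    same = np.equal.outer(labels, labels)
    return ((A - np.outer(k, k) / two_m) * same).sum() / two_m

split = np.array([0, 0, 0, 1, 1, 1])   # the two triangles
lumped = np.zeros(n, dtype=int)        # everything in one community
print(modularity(A, split), modularity(A, lumped))
```

The two-triangle partition scores Q = 5/14 ≈ 0.357, while lumping all nodes together scores 0; community detection algorithms search for partitions with high Q without being told them in advance.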

In this thesis we propose a new approach to deduction methods for temporal logic. Our proposal is based on an inductive definition of eventualities that is different from the usual one. On the basis of this non-customary inductive definition for eventualities, we first provide dual systems of tableaux and sequents for Propositional Linear-time Temporal Logic (PLTL). Then, we adapt the deductive approach introduced by means of these dual tableau and sequent systems to the resolution framework and present a clausal temporal resolution method for PLTL. Finally, we make use of this new clausal temporal resolution method to establish logical foundations for declarative temporal logic programming languages. The key element in deduction systems for temporal logic is dealing with eventualities and with hidden invariants that may prevent their fulfillment. Different ways of addressing this issue can be found in the literature on deduction systems for temporal logic. Traditional tableau systems for temporal logic generate an auxiliary graph in a first pass. Then, in a second pass, unsatisfiable nodes are pruned; in particular, the second pass must check whether the eventualities are fulfilled. The one-pass tableau calculus introduced by S. Schwendimann requires additional handling of information in order to detect cyclic branches that contain unfulfilled eventualities. Regarding traditional sequent calculi for temporal logic, the issue of eventualities and hidden invariants is tackled by means of inference rules (mainly invariant-based rules or infinitary rules) that complicate their automation.
A remarkable consequence of using either a two-pass approach based on auxiliary graphs or a one-pass approach that requires additional handling of information in the tableau framework, and either invariant-based rules or infinitary rules in the sequent framework, is that the classical correspondence between tableaux and sequents fails to carry over to temporal logic. In this thesis, we first provide a one-pass tableau method TTM that, instead of a graph, obtains a cyclic tree to decide whether a set of PLTL-formulas is satisfiable. In TTM, tableaux are classical-like. For unsatisfiable sets of formulas, TTM produces tableaux whose leaves contain a formula and its negation. In the case of satisfiable sets of formulas, TTM builds tableaux where each fully expanded open branch characterizes a collection of models for the set of formulas in the root. The tableau method TTM is complete and yields a decision procedure for PLTL. This tableau method is directly associated with a one-sided sequent calculus called TTC. Since TTM is free from all the structural rules that hinder the mechanization of deduction, e.g. weakening and contraction, the resulting sequent calculus TTC is also free from this kind of structural rule. In particular, TTC is free of any kind of cut, including invariant-based cut. From the deduction system TTC, we obtain a two-sided sequent calculus GTC that preserves all these freeness properties and is finitary, sound and complete for PLTL. Therefore, we show that the classical correspondence between tableaux and sequent calculi can be extended to temporal logic. The most fruitful approach in the literature on resolution methods for temporal logic, which started with the seminal paper of M. Fisher, deals with PLTL and requires generating invariants for performing resolution on eventualities. In this thesis, we present a new approach to resolution for PLTL.
The main novelty of our approach is that we do not generate invariants for performing resolution on eventualities. Our method is based on the dual methods of tableaux and sequents for PLTL mentioned above. Our resolution method involves translation into a clausal normal form that is a direct extension of classical CNF. We first show that any PLTL-formula can be transformed into this clausal normal form. Then, we present our temporal resolution method, called TRS-resolution, which extends classical propositional resolution. Finally, we prove that TRS-resolution is sound and complete. In fact, it terminates for any input formula, deciding its satisfiability, and hence it gives rise to a new decision procedure for PLTL. In the field of temporal logic programming, the declarative proposals that provide a completeness result do not allow eventualities, whereas the proposals that follow the imperative future approach either restrict the use of eventualities or deal with them by calculating an upper bound based on the small model property of PLTL. In the latter, when the length of a derivation reaches the upper bound, the derivation is given up and backtracking is used to try another possible derivation. In this thesis we present a declarative propositional temporal logic programming language, called TeDiLog, that is a combination of the temporal and disjunctive paradigms in Logic Programming. We establish the logical foundations of our proposal by formally defining operational and logical semantics for TeDiLog and by proving their equivalence. Since TeDiLog is, syntactically, a sublanguage of PLTL, the logical semantics of TeDiLog is supported by PLTL logical consequence. The operational semantics of TeDiLog is based on TRS-resolution. TeDiLog allows both eventualities and always-formulas to occur in clause heads as well as in clause bodies.
To the best of our knowledge, TeDiLog is the first declarative temporal logic programming language that achieves this degree of expressiveness. Since the tableau method presented in this thesis detects that the fulfillment of an eventuality is prevented by a hidden invariant without checking for it by means of an extra process, since our finitary sequent calculi do not include invariant-based rules, and since our resolution method dispenses with invariant generation, we say that our deduction methods are invariant-free.
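For readers unfamiliar with eventualities, a minimal sketch of their semantics (illustrative only; this is not the tableau or resolution machinery of the thesis) checks "eventually p" (F p) and "infinitely often p" (G F p) over an ultimately periodic model given as a finite prefix plus a loop:

```python
# A lasso word is prefix + loop repeated forever, e.g. prefix=[{'q'}], loop=[{'p'}, {}].
# Each position is the set of atomic propositions true there.

def eventually(p, prefix, loop):
    """Does F p hold at position 0 of prefix . loop^omega?
    Since the loop repeats forever, p must occur in the prefix or in the loop."""
    return any(p in s for s in prefix) or any(p in s for s in loop)

def always_eventually(p, prefix, loop):
    """Does G F p hold?  Only the loop matters: 'infinitely often'
    requires p to occur inside the repeated part."""
    return any(p in s for s in loop)

prefix, loop = [{'q'}], [{'p'}, set()]
print(eventually('p', prefix, loop))        # True: p occurs in the loop
print(always_eventually('q', prefix, loop)) # False: q occurs only in the prefix
```

The "hidden invariant" problem mentioned above is exactly the case where a loop never satisfies a pending eventuality, which a naive one-pass traversal may fail to notice.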

This atlas presents information on fish eggs and temperature data collected from broadscale ichthyoplankton surveys conducted off the U.S. northeast coast from 1977 to 1987. Distribution and abundance information is provided for 33 taxa in the form of graphs and contoured egg-density maps by month and survey. Comments are included on interannual and interseasonal trends in spawning intensity. Data on 14 additional but less numerous taxa are provided in tabular form. (PDF file contains 316 pages.)

ENGLISH: The rate of growth of tropical tunas has been studied by various investigators using diverse methods. Hayashi (1957) examined methods to determine the age of tunas by interpreting growth patterns on the bony or hard parts, but the results proved unreliable. Moore (1951), Hennemuth (1961), and Davidoff (1963) studied the age and growth of yellowfin tuna by the analysis of size frequency distributions. Schaefer, Chatwin and Broadhead (1961), and Fink (ms.), estimated the rate of growth of yellowfin tuna from tagging data; their estimates gave a somewhat slower rate of growth than that obtained by the study of length-frequency distributions. For the yellowfin tuna, modal groups representing age groups can be identified and followed for relatively long periods of time in length-frequency graphs. This may not be possible, however, for other tropical tunas where the modal groups may not represent identifiable age groups; this appears to be the case for skipjack tuna (Schaefer, 1962). It is necessary, therefore, to devise a method of estimating the growth rates of such species without identifying the year classes. The technique described in this study, hereafter called the "increment technique", employs the measurement of the change in length per unit of time, with respect to mean body length, without the identification of year classes. This technique is applied here as a method of estimating the growth rate of yellowfin tuna from the entire Eastern Tropical Pacific, and from the Commission's northern statistical areas (Areas 01-04 and 08) as shown in Figure 1. The growth rates of yellowfin tuna from Area 02 (Hennemuth, 1961) and from the northern areas (Davidoff, 1963) have been described by the technique of tracing modal progressions of year classes, hereafter termed the "year class technique". The growth rate analyses performed by both techniques apply to the segment of the population which is captured by tuna fishing vessels. 
The results obtained by both methods are compared in this report.
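The increment idea can be sketched numerically. Assuming a Gulland-Holt-style formulation (a standard textbook analogue of increment methods; the paper's exact procedure may differ), the change in length per unit time is regressed on mean body length, and the von Bertalanffy parameters K and L∞ fall out of the fitted line, with no year classes identified:

```python
import numpy as np

# Synthetic von Bertalanffy growth: L(t) = Linf * (1 - exp(-K t)).
# K_true and Linf_true are illustrative values, not estimates from this study.
K_true, Linf_true = 0.8, 180.0
t = np.linspace(0.5, 4.0, 8)
L = Linf_true * (1 - np.exp(-K_true * t))

# Increments between successive observations vs. mean length over each interval.
dL_dt = np.diff(L) / np.diff(t)
L_mean = (L[1:] + L[:-1]) / 2

# Gulland-Holt plot: dL/dt = K * (Linf - L_mean), a line with slope -K
# whose zero crossing is Linf.
slope, intercept = np.polyfit(L_mean, dL_dt, 1)
K_est, Linf_est = -slope, intercept / -slope
print(K_est, Linf_est)
```

The fit recovers L∞ exactly and K approximately (finite sampling intervals bias K slightly downward), which is why increment methods work from length measurements alone.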

Growth data obtained from a ten-year collection of scales from Maryland freshwater fish are presented in this report, in graphs and tables especially designed to be useful for Maryland fishery management. (PDF contains 40 pages)

The MLML_DBASE family of programs described here provides many of the algorithms used in oceanographic data reduction, general data manipulation and line graphs. These programs provide a consistent file structure for serial data typically encountered in oceanography. This introduction should provide enough general knowledge to explain the scope of the programs and to run the basic MLML_DBASE programs. It is not intended as a programmer's guide. (PDF contains 50 pages)

This report outlines the NOAA spectroradiometer data processing system implemented by the MLML_DBASE programs. This is done by presenting the algorithms and graphs showing the effects of each step in the algorithms. [PDF contains 32 pages]

The autorotation of two tandem triangular cylinders at different gap distances is investigated by numerical simulations. At a Reynolds number of 200, three distinct regimes are observed as the gap distance increases: angular oscillation, quasi-periodic autorotation and ‘chaotic’ autorotation. For various gap distances, the characteristics of vortex shedding and vortex interaction are discussed. The phase graphs (angular acceleration vs. angular velocity) and the power spectra of the moment are analyzed to characterize the motion of the cylinders. The Lyapunov exponent is also calculated to identify the existence of chaos.
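Lyapunov-exponent estimation of the kind mentioned above can be illustrated on a much simpler system. The sketch below (a generic one-dimensional example, unrelated to the cylinder simulations) computes the exponent of the logistic map at r = 4, where the known value is ln 2; a positive exponent signals chaos:

```python
import math

# Largest Lyapunov exponent of the logistic map x -> r x (1 - x),
# estimated as the long-run average of ln|f'(x)| along an orbit,
# where f'(x) = r (1 - 2x).
def lyapunov(r, x0=0.3, n_transient=1000, n=100_000):
    x = x0
    for _ in range(n_transient):     # discard the transient
        x = r * x * (1 - x)
    acc = 0.0
    for _ in range(n):
        acc += math.log(abs(r * (1 - 2 * x)))
        x = r * x * (1 - x)
    return acc / n

lam = lyapunov(4.0)
print(lam)   # close to ln 2 ~ 0.693 in the fully chaotic regime
```

For flow simulations like the one in this abstract, the exponent is instead estimated from the divergence of nearby trajectories in the reconstructed phase space, but the positive-exponent criterion is the same.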

The learning of probability distributions from data is a ubiquitous problem in the fields of Statistics and Artificial Intelligence. During the last decades several learning algorithms have been proposed to learn probability distributions based on decomposable models, due to their advantageous theoretical properties. Some of these algorithms can be used to search for a maximum likelihood decomposable model with a given maximum clique size, k, which controls the complexity of the model. Unfortunately, the problem of learning a maximum likelihood decomposable model given a maximum clique size is NP-hard for k > 2. In this work, we propose a family of algorithms that approximate this problem with a computational complexity of O(k · n^2 log n) in the worst case, where n is the number of random variables involved. The structures of the decomposable models that solve the maximum likelihood problem are called maximal k-order decomposable graphs. Our proposals, called fractal trees, construct a sequence of maximal i-order decomposable graphs, for i = 2, ..., k, in k − 1 steps. At each step, the algorithms follow a divide-and-conquer strategy based on the particular features of this type of structure. Additionally, we propose a prune-and-graft procedure that transforms a maximal k-order decomposable graph into another one, increasing its likelihood. We have implemented two particular fractal tree algorithms, called parallel fractal tree and sequential fractal tree. These algorithms can be considered a natural extension of Chow and Liu’s algorithm, from k = 2 to arbitrary values of k. Both algorithms have been compared against other efficient approaches in artificial and real domains, and they have shown competitive behavior in dealing with the maximum likelihood problem. Due to their low computational complexity they are especially recommended for high-dimensional domains.

A tabulated summary is presented of the main fisheries data collected to date (1998) by the Nigerian-German Kainji Lake Fisheries Promotion Project, together with a current overview of the fishery. The data are given under the following sections: 1) Fishing localities and types; 2) Frame survey data; 3) Number of licensed fishermen by state; 4) Mesh size distribution; 5) Fishing net characteristics; 6) Fish yield; 7) Total annual fishing effort by gear type; 8) Total annual value of fish landed by gear type; 9) Graphs of effort and CPUE by gear type. (PDF contains 36 pages)

The dissertation is concerned with the mathematical study of various network problems. First, three real-world networks are considered: (i) the human brain network, (ii) communication networks, and (iii) electric power networks. Although these networks perform very different tasks, they share similar mathematical foundations. The high-level goal is to analyze and/or synthesize each of these systems from a “control and optimization” point of view. After studying these three real-world networks, two abstract network problems, both motivated by power systems, are also explored. The first is “flow optimization over a flow network” and the second is “nonlinear optimization over a generalized weighted graph”. The results derived in this dissertation are summarized below.

Brain Networks: Neuroimaging data reveals the coordinated activity of spatially distinct brain regions, which may be represented mathematically as a network of nodes (brain regions) and links (interdependencies). To obtain the brain connectivity network, the graphs associated with the correlation matrix and the inverse covariance matrix—describing marginal and conditional dependencies between brain regions—have been proposed in the literature. A question arises as to whether any of these graphs provides useful information about brain connectivity. Due to the electrical properties of the brain, this problem will be investigated in the context of electrical circuits. First, we consider an electric circuit model and show that the inverse covariance matrix of the node voltages reveals the topology of the circuit. Second, we study the problem of finding the topology of the circuit based only on measurements. In this case, by assuming that the circuit is hidden inside a black box and only the nodal signals are available for measurement, the aim is to find the topology of the circuit when a limited number of samples are available. For this purpose, we deploy the graphical lasso technique to estimate a sparse inverse covariance matrix. It is shown that the graphical lasso may find most of the circuit topology if the exact covariance matrix is well-conditioned. However, it may fail to work well when this matrix is ill-conditioned. To deal with ill-conditioned matrices, we propose a small modification to the graphical lasso algorithm and demonstrate its performance. Finally, the technique developed in this work will be applied to the resting-state fMRI data of a number of healthy subjects.
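The inverse-covariance idea can be sketched in the well-conditioned case. The example below substitutes simple thresholding of the empirical precision matrix for the full graphical lasso (so it illustrates the principle, not the modified algorithm proposed in the work): samples are drawn from a Gaussian whose precision matrix encodes a chain graph, and the chain is read back off:

```python
import numpy as np

# Chain graph 0-1-2-3 encoded in a well-conditioned precision matrix:
# nonzero off-diagonal entries only between neighbouring nodes.
Theta = np.array([
    [ 2.0, -0.8,  0.0,  0.0],
    [-0.8,  2.0, -0.8,  0.0],
    [ 0.0, -0.8,  2.0, -0.8],
    [ 0.0,  0.0, -0.8,  2.0],
])
rng = np.random.default_rng(1)
samples = rng.multivariate_normal(np.zeros(4), np.linalg.inv(Theta), size=20000)

# Estimate the precision matrix and read off the graph by thresholding;
# zeros in the precision matrix correspond to conditional independencies.
Theta_hat = np.linalg.inv(np.cov(samples.T))
adj = (np.abs(Theta_hat) > 0.3) & ~np.eye(4, dtype=bool)
print(adj.astype(int))
```

With few samples or an ill-conditioned covariance, this naive inversion breaks down, which is exactly where the graphical lasso's sparsity penalty (and the modification proposed in this work) becomes necessary.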

Communication Networks: Congestion control techniques aim to adjust the transmission rates of competing users in the Internet in such a way that the network resources are shared efficiently. Despite the progress in the analysis and synthesis of Internet congestion control, almost all existing fluid models of congestion control assume that every link in the path of a flow observes the original source rate. To address this issue, a more accurate model is derived in this work for the behavior of the network under an arbitrary congestion controller, which takes into account the effect of buffering (queueing) on data flows. Using this model, it is proved that the well-known Internet congestion control algorithms may no longer be stable for the common pricing schemes, unless a sufficient condition is satisfied. It is also shown that these algorithms are guaranteed to be stable if a new pricing mechanism is used.
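A minimal fluid-model sketch (Kelly-style primal rate control with an illustrative price function; the parameters are hypothetical and the buffering effects studied in this work are deliberately omitted) shows the kind of rate dynamics being analyzed:

```python
# Kelly-style primal rate update for two identical users sharing one link:
#   x <- x + kappa * (w - x * p(total rate)),  with price p(y) = (y / C)**2.
# w is the user's willingness to pay, C the link capacity; values are illustrative.
w, C, kappa = 1.0, 2.0, 0.05
x = 0.1                                   # both users start at the same rate
for _ in range(2000):
    price = (2 * x / C) ** 2              # total rate is 2x by symmetry
    x += kappa * (w - x * price)
print(x)   # equilibrium solves w = x * (2x/C)^2, i.e. x = 1 here
```

Classical stability results for such dynamics assume each link sees the source rate directly; the point of this chapter is that queueing between source and link can invalidate those conclusions.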

Electrical Power Networks: Optimal power flow (OPF) has been one of the most studied problems for power systems since its introduction by Carpentier in 1962. This problem is concerned with finding an optimal operating point of a power network minimizing the total power generation cost subject to network and physical constraints. It is well known that OPF is computationally hard to solve due to the nonlinear interrelation among the optimization variables. The objective is to identify a large class of networks over which every OPF problem can be solved in polynomial time. To this end, a convex relaxation is proposed, which solves the OPF problem exactly for every radial network and every meshed network with a sufficient number of phase shifters, provided power over-delivery is allowed. The concept of “power over-delivery” is equivalent to relaxing the power balance equations to inequality constraints.

Flow Networks: In this part of the dissertation, the minimum-cost flow problem over an arbitrary flow network is considered. In this problem, each node is associated with some possibly unknown injection, each line has two unknown flows at its ends related to each other via a nonlinear function, and all injections and flows need to satisfy certain box constraints. This problem, named generalized network flow (GNF), is highly non-convex due to its nonlinear equality constraints. Under the assumption of monotonicity and convexity of the flow and cost functions, a convex relaxation is proposed, which always finds the optimal injections. A primary application of this work is the OPF problem. The results of this work on GNF prove that the relaxation of the power balance equations (i.e., power over-delivery) is not needed in practice under a very mild angle assumption.

Generalized Weighted Graphs: Motivated by power optimizations, this part aims to find a global optimization technique for a nonlinear optimization defined over a generalized weighted graph. Every edge of this type of graph is associated with a weight set corresponding to the known parameters of the optimization (e.g., the coefficients). The motivation behind this problem is to investigate how the (hidden) structure of a given real- or complex-valued optimization makes the problem easy to solve, and indeed the generalized weighted graph is introduced to capture the structure of an optimization. Various sufficient conditions are derived, which relate the polynomial-time solvability of different classes of optimization problems to weak properties of the generalized weighted graph such as its topology and the sign definiteness of its weight sets. As an application, it is proved that a broad class of real and complex optimizations over power networks are polynomial-time solvable due to the passivity of transmission lines and transformers.