950 results for S-graphs


Relevance:

10.00%

Publisher:

Abstract:

This atlas presents information on fish eggs and temperature data collected from broadscale ichthyoplankton surveys conducted off the U.S. northeast coast from 1977 to 1987. Distribution and abundance information is provided for 33 taxa in the form of graphs and contoured egg-density maps by month and survey. Comments are included on interannual and interseasonal trends in spawning intensity. Data on 14 additional but less numerous taxa are provided in tabular form. (PDF file contains 316 pages.)

Relevance:

10.00%

Publisher:

Abstract:

The rate of growth of tropical tunas has been studied by various investigators using diverse methods. Hayashi (1957) examined methods to determine the age of tunas by interpreting growth patterns on the bony or hard parts, but the results proved unreliable. Moore (1951), Hennemuth (1961), and Davidoff (1963) studied the age and growth of yellowfin tuna by analyzing size-frequency distributions. Schaefer, Chatwin and Broadhead (1961), and Fink (ms.), estimated the rate of growth of yellowfin tuna from tagging data; their estimates gave a somewhat slower rate of growth than that obtained from length-frequency distributions. For the yellowfin tuna, modal groups representing age groups can be identified and followed for relatively long periods of time in length-frequency graphs. This may not be possible, however, for other tropical tunas whose modal groups may not represent identifiable age groups; this appears to be the case for skipjack tuna (Schaefer, 1962). It is necessary, therefore, to devise a method of estimating the growth rates of such species without identifying the year classes. The technique described in this study, hereafter called the "increment technique", employs the measurement of the change in length per unit of time, with respect to mean body length, without the identification of year classes. This technique is applied here as a method of estimating the growth rate of yellowfin tuna from the entire Eastern Tropical Pacific, and from the Commission's northern statistical areas (Areas 01-04 and 08) as shown in Figure 1. The growth rates of yellowfin tuna from Area 02 (Hennemuth, 1961) and from the northern areas (Davidoff, 1963) have been described by the technique of tracing modal progressions of year classes, hereafter termed the "year class technique". The growth rate analyses performed by both techniques apply to the segment of the population which is captured by tuna fishing vessels. The results obtained by both methods are compared in this report.

Relevance:

10.00%

Publisher:

Abstract:

Growth data obtained from a ten-year collection of scales from Maryland freshwater fish are presented in this report, in graphs and tables especially designed to be useful for Maryland fishery management. (PDF contains 40 pages)

Relevance:

10.00%

Publisher:

Abstract:

The MLML_DBASE family of programs described here provides many of the algorithms used in oceanographic data reduction, general data manipulation, and line graphs. These programs provide a consistent file structure for serial data typically encountered in oceanography. This introduction should provide enough general knowledge to explain the scope of the programs and to run the basic MLML_DBASE programs. It is not intended as a programmer's guide. (PDF contains 50 pages)

Relevance:

10.00%

Publisher:

Abstract:

This report outlines the NOAA spectroradiometer data processing system implemented by the MLML_DBASE programs. This is done by presenting the algorithms and graphs showing the effects of each step in the algorithms. (PDF contains 32 pages)

Relevance:

10.00%

Publisher:

Abstract:

The autorotation of two tandem triangular cylinders at different gap distances is investigated by numerical simulations. At a Reynolds number of 200, three distinct regimes are observed as the gap distance increases: angular oscillation, quasi-periodic autorotation, and ‘chaotic’ autorotation. For various gap distances, the characteristics of vortex shedding and vortex interaction are discussed. Phase graphs (angular acceleration vs. angular velocity) and the power spectra of the moment are analyzed to characterize the motion of the cylinder. The Lyapunov exponent is also calculated to identify the existence of chaos.
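
The spectral side of this characterization is easy to sketch: a periodic or quasi-periodic response concentrates the power spectrum of the moment signal in sharp peaks, while chaotic motion spreads it over a broad band. A minimal numpy illustration on a synthetic two-tone signal (the signal, frequencies, and sampling rate are placeholders, not data from the study):

```python
import numpy as np

# Synthetic "moment" signal: two incommensurate tones, as a stand-in
# for a quasi-periodic response (placeholder data, not the study's).
fs = 200.0                      # sampling rate
t = np.arange(0, 40, 1 / fs)    # 40 time units of samples
signal = (np.sin(2 * np.pi * 1.5 * t)
          + 0.4 * np.sin(2 * np.pi * 1.5 * np.sqrt(2) * t))

# One-sided power spectrum via the real FFT.
spectrum = np.abs(np.fft.rfft(signal)) ** 2
freqs = np.fft.rfftfreq(signal.size, d=1 / fs)

# The dominant peak sits at the stronger tone (1.5 here); a chaotic
# signal would instead show a broadband spectrum with no clean peaks.
peak_freq = freqs[np.argmax(spectrum)]
```

A quasi-periodic spectrum shows a second, weaker peak near 1.5·√2; in a chaotic regime the same diagnostic shows a continuous band instead.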

Relevance:

10.00%

Publisher:

Abstract:

The learning of probability distributions from data is a ubiquitous problem in Statistics and Artificial Intelligence. Over the last decades, several learning algorithms have been proposed for probability distributions based on decomposable models, owing to their advantageous theoretical properties. Some of these algorithms can be used to search for a maximum likelihood decomposable model with a given maximum clique size, k, which controls the complexity of the model. Unfortunately, the problem of learning a maximum likelihood decomposable model given a maximum clique size is NP-hard for k > 2. In this work, we propose a family of algorithms which approximates this problem with a computational complexity of O(k · n^2 log n) in the worst case, where n is the number of random variables involved. The structures of the decomposable models that solve the maximum likelihood problem are called maximal k-order decomposable graphs. Our proposals, called fractal trees, construct a sequence of maximal i-order decomposable graphs, for i = 2, ..., k, in k − 1 steps. At each step, the algorithms follow a divide-and-conquer strategy based on the particular features of this type of structure. Additionally, we propose a prune-and-graft procedure which transforms a maximal k-order decomposable graph into another one with higher likelihood. We have implemented two particular fractal tree algorithms, called parallel fractal tree and sequential fractal tree, which can be considered a natural extension of Chow and Liu's algorithm from k = 2 to arbitrary values of k. Both algorithms have been compared against other efficient approaches in artificial and real domains, and they show competitive behavior on the maximum likelihood problem. Due to their low computational complexity they are especially recommended for high-dimensional domains.
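
For the base case k = 2, the algorithm being extended is Chow and Liu's: estimate pairwise mutual information from data and keep a maximum-weight spanning tree. A compact sketch of that base case (numpy only; the toy data and names are illustrative, and the fractal-tree steps for k > 2 are not reproduced):

```python
import numpy as np
from itertools import combinations

def mutual_information(x, y):
    """Empirical mutual information (in nats) of two discrete samples."""
    mi = 0.0
    for a in np.unique(x):
        for b in np.unique(y):
            pxy = np.mean((x == a) & (y == b))
            px, py = np.mean(x == a), np.mean(y == b)
            if pxy > 0:
                mi += pxy * np.log(pxy / (px * py))
    return mi

def chow_liu_tree(data):
    """Maximum-likelihood tree (the k = 2 decomposable model) via Kruskal."""
    n_vars = data.shape[1]
    edges = sorted(((mutual_information(data[:, i], data[:, j]), i, j)
                    for i, j in combinations(range(n_vars), 2)), reverse=True)
    parent = list(range(n_vars))
    def find(u):
        while parent[u] != u:
            parent[u] = parent[parent[u]]
            u = parent[u]
        return u
    tree = []
    for w, i, j in edges:            # greedily keep heaviest non-cycle edges
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            tree.append((i, j))
    return tree

# Toy data: X1 is an exact copy of X0, X2 is independent noise,
# so the learned tree must contain the edge (0, 1).
rng = np.random.default_rng(0)
x0 = rng.integers(0, 2, 500)
data = np.column_stack([x0, x0, rng.integers(0, 2, 500)])
tree = chow_liu_tree(data)
```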

Relevance:

10.00%

Publisher:

Abstract:

A tabulated summary is presented of the main fisheries data collected to date (1998) by the Nigerian-German Kainji Lake Fisheries Promotion Project, together with a current overview of the fishery. The data are given under the following sections: 1) Fishing localities and types; 2) Frame survey data; 3) Number of licensed fishermen by state; 4) Mesh size distribution; 5) Fishing net characteristics; 6) Fish yield; 7) Total annual fishing effort by gear type; 8) Total annual value of fish landed by gear type; 9) Graphs of effort and CPUE by gear type. (PDF contains 36 pages)

Relevance:

10.00%

Publisher:

Abstract:

The dissertation is concerned with the mathematical study of various network problems. First, three real-world networks are considered: (i) the human brain network, (ii) communication networks, (iii) electric power networks. Although these networks perform very different tasks, they share similar mathematical foundations. The high-level goal is to analyze and/or synthesize each of these systems from a “control and optimization” point of view. After studying these three real-world networks, two abstract network problems are also explored, both motivated by power systems. The first is “flow optimization over a flow network” and the second is “nonlinear optimization over a generalized weighted graph”. The results derived in this dissertation are summarized below.

Brain Networks: Neuroimaging data reveals the coordinated activity of spatially distinct brain regions, which may be represented mathematically as a network of nodes (brain regions) and links (interdependencies). To obtain the brain connectivity network, the graphs associated with the correlation matrix and the inverse covariance matrix—describing marginal and conditional dependencies between brain regions—have been proposed in the literature. A question arises as to whether any of these graphs provides useful information about brain connectivity. Due to the electrical properties of the brain, this problem is investigated in the context of electrical circuits. First, we consider an electric circuit model and show that the inverse covariance matrix of the node voltages reveals the topology of the circuit. Second, we study the problem of finding the topology of the circuit from measurements alone: assuming that the circuit is hidden inside a black box and only the nodal signals are available for measurement, the aim is to find the topology of the circuit when a limited number of samples is available. For this purpose, we deploy the graphical lasso technique to estimate a sparse inverse covariance matrix. It is shown that the graphical lasso may recover most of the circuit topology if the exact covariance matrix is well-conditioned, but it may fail to work well when this matrix is ill-conditioned. To deal with ill-conditioned matrices, we propose a small modification to the graphical lasso algorithm and demonstrate its performance. Finally, the technique developed in this work is applied to the resting-state fMRI data of a number of healthy subjects.
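
The first observation, that the inverse covariance matrix of the node signals encodes the circuit graph, can be illustrated directly: build a sparse precision matrix from a chain graph, note that the covariance it induces is dense, and recover the sparsity pattern by inverting back. A toy numpy sketch (the graphical lasso itself, and the modification proposed in the thesis, are not reproduced here):

```python
import numpy as np

# Precision (inverse covariance) matrix of a 4-node chain 1-2-3-4:
# off-diagonal entries are nonzero only on graph edges.
theta = np.array([[ 2., -1.,  0.,  0.],
                  [-1.,  2., -1.,  0.],
                  [ 0., -1.,  2., -1.],
                  [ 0.,  0., -1.,  2.]])

sigma = np.linalg.inv(theta)        # covariance: dense, hides the chain
theta_back = np.linalg.inv(sigma)   # precision recovered from covariance

dense_corr = abs(sigma[0, 3])           # end nodes are marginally dependent
recovered_zero = abs(theta_back[0, 3])  # but conditionally independent
```

With finitely many noisy samples the empirical covariance replaces `sigma`, and the inversion must be regularized; that is exactly the role of the graphical lasso discussed above.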

Communication Networks: Congestion control techniques aim to adjust the transmission rates of competing users in the Internet in such a way that network resources are shared efficiently. Despite the progress in the analysis and synthesis of Internet congestion control, almost all existing fluid models of congestion control assume that every link in the path of a flow observes the original source rate. To address this issue, a more accurate model is derived in this work for the behavior of the network under an arbitrary congestion controller, which takes into account the effect of buffering (queueing) on data flows. Using this model, it is proved that the well-known Internet congestion control algorithms may no longer be stable for the common pricing schemes, unless a sufficient condition is satisfied. It is also shown that these algorithms are guaranteed to be stable if a new pricing mechanism is used.
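
A primal rate-control fluid model of the kind discussed here can be simulated in a few lines. For a single user with utility w·log(x) crossing one link with price function p(y) = (y/C)^b, the update dx/dt = kappa·(w − x·p(x)) drives the rate to the point where marginal utility equals price. This is a generic Kelly-style primal controller used as a sketch, not the buffered model derived in the thesis, and all constants are illustrative:

```python
# Illustrative constants: utility weight, link capacity, price exponent, gain.
w, C, b, kappa = 1.0, 1.0, 2.0, 0.5
dt, steps = 0.01, 5000

x = 0.1                                 # initial transmission rate
for _ in range(steps):
    price = (x / C) ** b                # congestion price seen by the flow
    x += dt * kappa * (w - x * price)   # primal (Euler) rate update

# Equilibrium solves w = x * (x/C)**b, i.e. x* = (w * C**b) ** (1/(b+1)).
x_star = (w * C ** b) ** (1 / (b + 1))
```

The point of the thesis's model is precisely that once queueing delays distort the rate each link observes, convergence of this kind can be lost unless the pricing scheme satisfies an extra condition.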

Electrical Power Networks: Optimal power flow (OPF) has been one of the most studied problems for power systems since its introduction by Carpentier in 1962. This problem is concerned with finding an optimal operating point of a power network minimizing the total power generation cost subject to network and physical constraints. It is well known that OPF is computationally hard to solve due to the nonlinear interrelation among the optimization variables. The objective is to identify a large class of networks over which every OPF problem can be solved in polynomial time. To this end, a convex relaxation is proposed, which solves the OPF problem exactly for every radial network and every meshed network with a sufficient number of phase shifters, provided power over-delivery is allowed. The concept of “power over-delivery” is equivalent to relaxing the power balance equations to inequality constraints.
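
The role of “power over-delivery” can be seen even in a toy dispatch problem: relax the supply-equals-demand equality to supply ≥ demand, and with increasing generation costs the relaxed optimum still meets demand exactly, so nothing is lost by relaxing. A hedged two-generator sketch (quadratic costs and all numbers are made up; the thesis's actual relaxation concerns the full nonlinear power-flow equations):

```python
import numpy as np

# Two generators with increasing quadratic costs, one demand d.
c1, c2, d = 1.0, 2.0, 10.0
cost = lambda x1, x2: c1 * x1 ** 2 + c2 * x2 ** 2

# Closed form under the EQUALITY constraint x1 + x2 = d:
# equal marginal costs 2*c1*x1 = 2*c2*x2 give x1 = d*c2/(c1 + c2).
x1_eq = d * c2 / (c1 + c2)
x2_eq = d - x1_eq
best_eq = cost(x1_eq, x2_eq)

# Brute-force search over the RELAXED feasible set x1 + x2 >= d
# ("over-delivery" allowed); the small tolerance absorbs grid round-off.
grid = np.linspace(0, d, 401)
best_rel, arg = np.inf, None
for x1 in grid:
    for x2 in grid:
        if x1 + x2 >= d - 1e-9 and cost(x1, x2) < best_rel:
            best_rel, arg = cost(x1, x2), (x1, x2)
```

Because both cost terms are increasing, the relaxed optimum sits on the boundary x1 + x2 = d: the relaxation is exact, which is the one-line intuition behind allowing over-delivery in the convex OPF relaxation.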

Flow Networks: In this part of the dissertation, the minimum-cost flow problem over an arbitrary flow network is considered. In this problem, each node is associated with some possibly unknown injection, each line has two unknown flows at its ends related to each other via a nonlinear function, and all injections and flows need to satisfy certain box constraints. This problem, named generalized network flow (GNF), is highly non-convex due to its nonlinear equality constraints. Under the assumption of monotonicity and convexity of the flow and cost functions, a convex relaxation is proposed, which always finds the optimal injections. A primary application of this work is in the OPF problem. The results of this work on GNF prove that the relaxation on power balance equations (i.e., load over-delivery) is not needed in practice under a very mild angle assumption.

Generalized Weighted Graphs: Motivated by power optimizations, this part aims to find a global optimization technique for a nonlinear optimization defined over a generalized weighted graph. Every edge of this type of graph is associated with a weight set corresponding to the known parameters of the optimization (e.g., the coefficients). The motivation behind this problem is to investigate how the (hidden) structure of a given real/complex valued optimization makes the problem easy to solve, and indeed the generalized weighted graph is introduced to capture the structure of an optimization. Various sufficient conditions are derived, which relate the polynomial-time solvability of different classes of optimization problems to weak properties of the generalized weighted graph such as its topology and the sign definiteness of its weight sets. As an application, it is proved that a broad class of real and complex optimizations over power networks are polynomial-time solvable due to the passivity of transmission lines and transformers.

Relevance:

10.00%

Publisher:

Abstract:

This thesis focuses mainly on linear algebraic aspects of combinatorics. Let N_t(H) be an incidence matrix with edges versus all subhypergraphs of a complete hypergraph that are isomorphic to H. Richard M. Wilson and the author find the general formula for the Smith normal form or diagonal form of N_t(H) for all simple graphs H and for a very general class of t-uniform hypergraphs H.
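
For readers meeting the object for the first time: the Smith normal form of an integer matrix is the diagonal matrix diag(d_1, d_2, ...) reachable by integer row and column operations, with each d_i dividing d_{i+1}. A small self-contained implementation of the generic textbook reduction (not the closed-form formula derived in the thesis):

```python
def smith_normal_form(mat):
    """Diagonal entries of the Smith normal form of an integer matrix."""
    A = [row[:] for row in mat]
    m, n = len(A), len(A[0])
    diag, t = [], 0
    while t < min(m, n):
        # Move a nonzero entry of minimal absolute value to position (t, t).
        best = None
        for i in range(t, m):
            for j in range(t, n):
                if A[i][j] and (best is None or
                                abs(A[i][j]) < abs(A[best[0]][best[1]])):
                    best = (i, j)
        if best is None:
            break                               # rest of the matrix is zero
        i0, j0 = best
        A[t], A[i0] = A[i0], A[t]
        for row in A:
            row[t], row[j0] = row[j0], row[t]
        # Clear column t and row t by integer (Euclidean) eliminations.
        dirty = False
        for i in range(t + 1, m):
            if A[i][t] % A[t][t]:
                dirty = True
            q = A[i][t] // A[t][t]
            for j in range(t, n):
                A[i][j] -= q * A[t][j]
        for j in range(t + 1, n):
            if A[t][j] % A[t][t]:
                dirty = True
            q = A[t][j] // A[t][t]
            for i in range(t, m):
                A[i][j] -= q * A[i][t]
        if dirty:
            continue                    # smaller remainders appeared; retry pivot
        # Divisibility: the pivot must divide every remaining entry.
        ok = True
        for i in range(t + 1, m):
            if any(A[i][j] % A[t][t] for j in range(t + 1, n)):
                for k in range(t, n):   # fold row i into the pivot row, retry
                    A[t][k] += A[i][k]
                ok = False
                break
        if not ok:
            continue
        diag.append(abs(A[t][t]))
        t += 1
    return diag
```

For example, `smith_normal_form([[2, 0], [0, 3]])` gives `[1, 6]`: the invariant factors, not the raw diagonal. The thesis computes these factors for the incidence matrices N_t(H) in closed form rather than by elimination.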

As a continuation, the author determines the formula for diagonal forms of integer matrices obtained from other combinatorial structures, including incidence matrices for subgraphs of a complete bipartite graph and inclusion matrices for multisets.

One major application of diagonal forms is in zero-sum Ramsey theory. For instance, Caro's results in zero-sum Ramsey numbers for graphs and Caro and Yuster's results in zero-sum bipartite Ramsey numbers can be reproduced. These results are further generalized to t-uniform hypergraphs. Other applications include signed bipartite graph designs.

Research results on some other problems are also included in this thesis, such as a Ramsey-type problem on equipartitions, Hartman's conjecture on large sets of designs and a matroid theory problem proposed by Welsh.

Relevance:

10.00%

Publisher:

Abstract:

A classical question in combinatorics is the following: given a partial Latin square P, when can we complete P to a Latin square L? In this paper, we investigate the class of ε-dense partial Latin squares: partial Latin squares in which each symbol, row, and column contains no more than εn nonblank cells. Based on a conjecture of Nash-Williams, Daykin and Häggkvist conjectured that all 1/4-dense partial Latin squares are completable. In this paper, we discuss the proof methods and results used in previous attempts to resolve this conjecture, introduce a novel technique derived from a paper by Jacobson and Matthews on generating random Latin squares, and use this technique to study ε-dense partial Latin squares that contain no more than δn² filled cells in total.
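
For reference, a partial Latin square is an n × n array over n symbols in which each symbol occurs at most once per row and per column, with some cells blank; completing it means filling every blank while preserving that property. A naive backtracking completer, exponential in the worst case (the chapters below concern guaranteeing completability, and polynomial-time completion, for dense-enough instances, which this sketch does not attempt):

```python
def complete(square):
    """Backtracking completion of a partial Latin square.

    `square` is a list of lists with entries in 0..n-1, or None for blanks.
    Returns a completed square, or None if no completion exists.
    """
    n = len(square)

    def ok(r, c, s):
        # Symbol s must be absent from row r and column c.
        return all(square[r][j] != s for j in range(n)) and \
               all(square[i][c] != s for i in range(n))

    def solve():
        for r in range(n):
            for c in range(n):
                if square[r][c] is None:
                    for s in range(n):
                        if ok(r, c, s):
                            square[r][c] = s
                            if solve():
                                return True
                            square[r][c] = None
                    return False        # no symbol fits this blank
        return True                     # no blanks left

    return square if solve() else None

# A completable 4x4 partial Latin square (None = blank cell).
P = [[0,    None, None, None],
     [None, 2,    None, None],
     [None, None, 3,    None],
     [None, None, None, 1]]
L = complete(P)
```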

In Chapter 2, we construct completions for all ε-dense partial Latin squares containing no more than δn² filled cells in total, given that ε < 1/12 and δ < (1 − 12ε)²/10409. In particular, we show that all 9.8·10⁻⁵-dense partial Latin squares are completable. In Chapter 4, we improve these results by roughly a factor of two using probabilistic techniques. These results improve prior work by Gustavsson, which required ε = δ ≤ 10⁻⁷, as well as by Chetwynd and Häggkvist, which required ε = δ = 10⁻⁵, with n even and greater than 10⁷.

If we omit the probabilistic techniques noted above, we further show that such completions can always be found in polynomial time. This contrasts with a result of Colbourn, which states that completing arbitrary partial Latin squares is an NP-complete task. In Chapter 3, we strengthen Colbourn's result to the claim that completing an arbitrary (1/2 + ε)-dense partial Latin square is NP-complete, for any ε > 0.

Colbourn's result hinges heavily on a connection between triangulations of tripartite graphs and Latin squares. Motivated by this, we use our results on Latin squares to prove that any tripartite graph G = (V₁, V₂, V₃) such that (i) |V₁| = |V₂| = |V₃| = n, (ii) for every vertex v ∈ Vᵢ, deg₊(v) = deg₋(v) ≥ (1 − ε)n, and (iii) |E(G)| > (1 − δ)·3n², admits a triangulation, if ε < 1/132 and δ < (1 − 132ε)²/83272. In particular, this holds when ε = δ = 1.197·10⁻⁵.

This strengthens results of Gustavsson, which require ε = δ = 10⁻⁷.

In an unrelated vein, Chapter 6 explores the class of quasirandom graphs, a notion first introduced by Chung, Graham and Wilson (1989). Roughly speaking, a sequence of graphs is called "quasirandom" if it has a number of properties possessed by the random graph, all of which turn out to be equivalent. In this chapter, we study possible extensions of these results to random k-edge colorings, and create an analogue of Chung, Graham and Wilson's result for such colorings.
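
One of the equivalent quasirandomness properties of Chung, Graham and Wilson is spectral: edge density about p together with all non-top adjacency eigenvalues of order o(n). A quick numerical check on one sample of G(n, 1/2) (illustrative parameters; this verifies the spectral property for a single sample and nothing more):

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 300, 0.5

# Sample a symmetric 0/1 adjacency matrix of G(n, p), zero diagonal.
coins = rng.random((n, n)) < p
A = np.triu(coins, 1).astype(float)
A = A + A.T

eig = np.linalg.eigvalsh(A)          # eigenvalues in ascending order
lam1 = eig[-1]                       # top eigenvalue, about p*n
lam2 = max(abs(eig[-2]), abs(eig[0]))  # largest remaining magnitude, O(sqrt(n))
```

The separation lam1 ≈ pn versus lam2 = O(√n) is the spectral fingerprint that the chapter's colored analogue generalizes to random k-edge colorings.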

Relevance:

10.00%

Publisher:

Abstract:

There is a growing interest in taking advantage of possible patterns and structures in data so as to extract the desired information and overcome the curse of dimensionality. In a wide range of applications, including computer vision, machine learning, medical imaging, and social networks, the signal that gives rise to the observations can be modeled to be approximately sparse and exploiting this fact can be very beneficial. This has led to an immense interest in the problem of efficiently reconstructing a sparse signal from limited linear observations. More recently, low-rank approximation techniques have become prominent tools to approach problems arising in machine learning, system identification and quantum tomography.

In sparse and low-rank estimation problems, the challenge is the inherent intractability of the objective function, and one needs efficient methods to capture the low-dimensionality of these models. Convex optimization is often a promising tool to attack such problems. An intractable problem with a combinatorial objective can often be "relaxed" to obtain a tractable but almost as powerful convex optimization problem. This dissertation studies convex optimization techniques that can take advantage of low-dimensional representations of the underlying high-dimensional data. We provide provable guarantees that ensure that the proposed algorithms will succeed under reasonable conditions, and answer questions of the following flavor:

  • For a given number of measurements, can we reliably estimate the true signal?
  • If so, how good is the reconstruction as a function of the model parameters?

More specifically, i) Focusing on linear inverse problems, we generalize the classical error bounds known for the least-squares technique to the lasso formulation, which incorporates the signal model. ii) We show that intuitive convex approaches do not perform as well as expected when it comes to signals that have multiple low-dimensional structures simultaneously. iii) Finally, we propose convex relaxations for the graph clustering problem and give sharp performance guarantees for a family of graphs arising from the so-called stochastic block model. We pay particular attention to the following aspects. For i) and ii), we aim to provide a general geometric framework, in which the results on sparse and low-rank estimation can be obtained as special cases. For i) and iii), we investigate the precise performance characterization, which yields the right constants in our bounds and the true dependence between the problem parameters.
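
The lasso formulation mentioned in (i) can be solved by iterative soft-thresholding (ISTA), which makes the geometry concrete: each iteration takes a gradient step on the least-squares term and then shrinks toward zero. In the orthonormal case A = I the lasso solution is soft-thresholding of the observation itself, which the sketch below verifies (a generic ISTA sketch with illustrative parameters, not the analysis of the dissertation):

```python
import numpy as np

def soft(v, t):
    """Soft-thresholding: the proximal operator of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, b, lam, iters=500):
    """Minimize 0.5*||Ax - b||^2 + lam*||x||_1 by proximal gradient steps."""
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = soft(x - A.T @ (A @ x - b) / L, lam / L)
    return x

# Sanity check against the closed form: with A = I, the lasso solution
# is exactly soft(b, lam), and ISTA reaches it immediately.
b = np.array([3.0, -0.5, 1.2, 0.1])
x_hat = ista(np.eye(4), b, lam=1.0)
x_closed = soft(b, 1.0)
```

The shrinkage bias visible here (3.0 becomes 2.0) is exactly the kind of effect the error bounds in (i) quantify as a function of the model parameters.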

Relevance:

10.00%

Publisher:

Abstract:

The goal of this project was to develop a software tool for measuring the performance of networks based on 4G mobile technology, also known as LTE. To this end, a software system was built, composed of a mobile application and an application server. As a whole, the system collects quality indicators of various kinds from the mobile network, which are then processed with mathematical software tools to produce graphs and maps for analyzing the condition and performance of a given 4G network. The software was developed to prototype level, and real-world tests with it have produced positive operating results.

Relevance:

10.00%

Publisher:

Abstract:

A Digital Atlas is an atlas designed with computational techniques and, consequently, accessible through a computer. Structured in a graphical environment, it can include, besides maps, texts, photographs, statistical data, graphs, and tables. Being in digital form, it can draw on a wide range of themes, formats, and scales. This dissertation presents a Digital Atlas prototype as a contribution to the Municipal Information System (SIM) of the municipality of São João de Meriti, RJ. The SIM, which targets municipal services, is meant to serve the municipality itself, its citizens, and other parties interested in the city; its information is fundamental to improving municipal administration. The research focused on habitability, understood as a set of conditions for building a healthy habitat, spanning physical, psychological, social, cultural, and environmental themes. Within habitability, the subtopics of water supply infrastructure, sewage, garbage collection, health, and education were addressed; these subtopics were cross-compared to contrast the municipality's neighborhoods. The SIM and habitability are contemplated in the city's master plan, which provides much of the theoretical grounding of the dissertation. The Digital Atlas prototype was modeled and implemented with free software, making it possible to access thematic maps and other information about São João de Meriti.

Relevance:

10.00%

Publisher:

Abstract:

Error analyses are conducted before any project is developed. The need to understand the behavior of numerical error on structured and unstructured meshes arises with the growing use of such meshes in discretization methods. The goal of this work was therefore to create a methodology for analyzing the discretization errors generated by truncation of the Taylor series, applied to the steady one- and two-dimensional Poisson and advection-diffusion equations, using the Finite Volume Method on Voronoi meshes. These equations were chosen because of their wide use in testing new mathematical models and interpolation functions. The Central Difference Scheme (CDS) and the Upwind Difference Scheme (UDS) were used for the advective terms. The influence of the type of boundary condition and of the position of the volume's generating point on the numerical solution was examined. Analytical results were compared against experimental results for two types of Voronoi mesh, one Cartesian and one triangular, confirming the influence of the finite-volume shape on the numerical solution. The study found that discretization with the CDS scheme yields smaller errors than discretization with the UDS scheme, in agreement with the literature. Differences in the errors of neighboring volumes in triangular meshes were also observed, which prevents uniformity in the error graphs studied. Cartesian meshes with the node at the volume centroid were found to have a smaller discretization error than triangular meshes, but the use of this type of mesh depends on the geometry of the problem under study.
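
The CDS-versus-UDS finding summarized above can be reproduced on the simplest relevant problem: steady one-dimensional advection-diffusion with Dirichlet boundaries on a uniform grid. This is a Cartesian stand-in for the Voronoi meshes of the study, with all parameters illustrative:

```python
import numpy as np

# Steady 1-D advection-diffusion u*phi' = Gamma*phi'' on [0,1],
# phi(0) = 0, phi(1) = 1.  Exact: phi(x) = (exp(Pe*x) - 1)/(exp(Pe) - 1).
Pe, N = 5.0, 20                     # global Peclet number, number of cells
dx = 1.0 / N
F, D = Pe, 1.0 / dx                 # convective / diffusive face strengths (Gamma = 1)

def solve(a_e, a_w):
    """Assemble and solve the FV system with interior-face coefficients a_e, a_w."""
    A = np.zeros((N, N)); b = np.zeros(N)
    for i in range(N):
        aw = a_w if i > 0 else 0.0
        ae = a_e if i < N - 1 else 0.0
        awb = 2 * D + F if i == 0 else 0.0       # west boundary face, phi = 0
        aeb = 2 * D - F if i == N - 1 else 0.0   # east boundary face, phi = 1
        A[i, i] = aw + ae + awb + aeb
        if i > 0:
            A[i, i - 1] = -aw
        if i < N - 1:
            A[i, i + 1] = -ae
        b[i] = aeb * 1.0                          # only the east BC is nonzero
    return np.linalg.solve(A, b)

x = (np.arange(N) + 0.5) * dx                     # cell centres
exact = (np.exp(Pe * x) - 1) / (np.exp(Pe) - 1)

phi_cds = solve(D - F / 2, D + F / 2)             # central differencing
phi_uds = solve(D, D + F)                         # upwind differencing (u > 0)

err_cds = np.max(np.abs(phi_cds - exact))
err_uds = np.max(np.abs(phi_uds - exact))
```

With a cell Peclet number F·dx/D = 0.25, CDS is stable and second-order, so its error is well below the first-order UDS error, matching the conclusion of the study; for cell Peclet numbers above 2 the comparison changes, which is why both schemes remain in use.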