864 results for Vertex Coloring


Relevance:

10.00%

Publisher:

Abstract:

Hyperspectral instruments have been incorporated in satellite missions, providing high-spectral-resolution data of the Earth. These data can be used in remote sensing applications such as target detection, hazard prevention, and monitoring oil spills, among others. In most of these applications, a requirement of paramount importance is the ability to give real-time or near-real-time responses. Recently, onboard processing systems have emerged in order to overcome the huge amount of data to transfer from the satellite to the ground station, thus avoiding delays between hyperspectral image acquisition and its interpretation. For this purpose, compact reconfigurable hardware modules, such as field-programmable gate arrays (FPGAs), are widely used. This paper proposes a parallel FPGA-based architecture for endmember signature extraction. The method, based on Vertex Component Analysis (VCA), has several advantages: it is unsupervised, fully automatic, and works without a dimensionality reduction (DR) pre-processing step. The architecture has been designed for a low-cost Xilinx Zynq board with a Zynq-7020 SoC FPGA based on Artix-7 FPGA programmable logic, and tested using real hyperspectral data sets collected by NASA's Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) over the Cuprite mining district in Nevada. Experimental results indicate that the proposed implementation can achieve real-time processing while maintaining the method's accuracy, which indicates the potential of the proposed platform for implementing high-performance, low-cost embedded systems, opening new perspectives for onboard hyperspectral image processing.
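As a point of reference for the processing chain implemented in hardware, here is a minimal NumPy sketch of the core VCA iteration. It illustrates the published algorithm's structure (function and variable names are ours), not the paper's FPGA design, and it omits details such as how the DR-free variant handles the subspace projection.

```python
import numpy as np

def vca(Y, p, seed=0):
    """Sketch of Vertex Component Analysis: extract p endmember
    signatures from hyperspectral data Y (bands x pixels)."""
    rng = np.random.default_rng(seed)
    n_bands, _ = Y.shape
    A = np.zeros((n_bands, p))      # endmember signatures found so far
    indices = np.zeros(p, dtype=int)
    for i in range(p):
        w = rng.standard_normal(n_bands)
        # Project a random vector onto the orthogonal complement of the
        # subspace spanned by the endmembers found so far.
        f = w - A @ (np.linalg.pinv(A) @ w)
        f /= np.linalg.norm(f)
        v = f @ Y                   # project every pixel onto direction f
        indices[i] = np.argmax(np.abs(v))
        A[:, i] = Y[:, indices[i]]  # the extreme pixel is a new endmember
    return A, indices
```

The argmax-over-projections step dominates the cost and also parallelizes naturally over pixels, which is what makes the method a good fit for FPGA pipelines.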

Relevance:

10.00%

Publisher:

Abstract:

In today’s big data world, data is being produced in massive volumes, at great velocity, and from a variety of different sources such as mobile devices, sensors, a plethora of small devices hooked to the internet (Internet of Things), social networks, communication networks, and many others. Interactive querying and large-scale analytics are being increasingly used to derive value out of this big data. A large portion of this data is being stored and processed in the Cloud due to the several advantages provided by the Cloud, such as scalability, elasticity, availability, low cost of ownership, and the overall economies of scale. There is thus a growing need for large-scale cloud-based data management systems that can support real-time ingest, storage, and processing of large volumes of heterogeneous data. However, in the pay-as-you-go Cloud environment, the cost of analytics can grow linearly with the time and resources required. Reducing the cost of data analytics in the Cloud thus remains a primary challenge. In my dissertation research, I have focused on building efficient and cost-effective cloud-based data management systems for different application domains that are predominant in cloud computing environments. In the first part of my dissertation, I address the problem of reducing the cost of transactional workloads on relational databases to support database-as-a-service in the Cloud. The primary challenges in supporting such workloads include choosing how to partition the data across a large number of machines, minimizing the number of distributed transactions, providing high data availability, and tolerating failures gracefully. I have designed, built, and evaluated SWORD, an end-to-end scalable online transaction processing system that utilizes workload-aware data placement and replication to minimize the number of distributed transactions, and that incorporates a suite of novel techniques to significantly reduce the overheads incurred both during the initial placement of data and during query execution at runtime. In the second part of my dissertation, I focus on sampling-based progressive analytics as a means to reduce the cost of data analytics in the relational domain. Sampling has been traditionally used by data scientists to get progressive answers to complex analytical tasks over large volumes of data. Typically, this involves manually extracting samples of increasing data size (progressive samples) for exploratory querying. This provides the data scientists with user control, repeatable semantics, and result provenance. However, such solutions result in tedious workflows that preclude the reuse of work across samples. On the other hand, existing approximate query processing systems report early results, but do not offer the above benefits for complex ad-hoc queries. I propose a new progressive data-parallel computation framework, NOW!, that provides support for progressive analytics over big data. In particular, NOW! enables progressive relational (SQL) query support in the Cloud using unique progress semantics that allow efficient and deterministic query processing over samples, providing meaningful early results and provenance to data scientists. NOW! enables the provision of early results using significantly fewer resources, thereby enabling a substantial reduction in the cost incurred during such analytics. Finally, I propose NSCALE, a system for efficient and cost-effective complex analytics on large-scale graph-structured data in the Cloud.
The system is based on the key observation that a wide range of complex analysis tasks over graph data require processing and reasoning about a large number of multi-hop neighborhoods or subgraphs in the graph; examples include ego network analysis, motif counting in biological networks, finding social circles in social networks, personalized recommendations, link prediction, etc. These tasks are not well served by existing vertex-centric graph processing frameworks, whose computation and execution models limit the user program to directly accessing the state of a single vertex, resulting in high execution overheads. Further, the lack of support for extracting the relevant portions of the graph that are of interest to an analysis task and loading them into distributed memory leads to poor scalability. NSCALE allows users to write programs at the level of neighborhoods or subgraphs rather than at the level of vertices, and to declaratively specify the subgraphs of interest. It enables the efficient distributed execution of these neighborhood-centric complex analysis tasks over large-scale graphs, while minimizing resource consumption and communication cost, thereby substantially reducing the overall cost of graph data analytics in the Cloud. The results of our extensive experimental evaluation of these prototypes with several real-world data sets and applications validate the effectiveness of our techniques, which provide orders-of-magnitude reductions in the overheads of distributed data querying and analysis in the Cloud.
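To make the neighborhood-centric programming model concrete, here is a hypothetical single-machine sketch (not NSCALE's actual API): the user declaratively selects the subgraphs of interest (here, 1-hop ego networks around chosen vertices) and supplies a function that runs on each extracted subgraph rather than on individual vertices.

```python
import networkx as nx

def neighborhood_centric(G, vertices_of_interest, radius, task):
    """Run `task` on the r-hop neighborhood subgraph of each vertex of
    interest -- the unit of computation is a subgraph, not a vertex."""
    results = {}
    for v in vertices_of_interest:
        subgraph = nx.ego_graph(G, v, radius=radius)
        results[v] = task(subgraph, v)
    return results

# Example task: triangle count inside each ego network
G = nx.karate_club_graph()
counts = neighborhood_centric(
    G, vertices_of_interest=[0, 33], radius=1,
    task=lambda sg, v: sum(nx.triangles(sg).values()) // 3)
```

In a distributed setting the interesting problems are exactly the ones the abstract names: extracting only the required subgraphs and packing them into distributed memory across machines.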

Relevance:

10.00%

Publisher:

Abstract:

Sharp edges were first used for field ionisation mass spectrometry by Beckey. Although Cross and Robertson found that etched metal foils were more effective than razor blades for field ionisation, blades are very convenient for the determination of field ionisation mass spectra, as reported by Robertson and Viney. The electric field at the vertex of a sharp edge can be calculated by the method of conformal transformation. Here we give some equations for the field, deduced under the assumption that the edge surface can be approximated by a hyperbola. We also compare two hyperbolae, with radii of curvature at the vertex of 500 Å and 1000 Å, with the profile of a commercial carbon-steel razor blade.
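For orientation (these are not necessarily the paper's exact expressions), solving Laplace's equation in confocal elliptic coordinates for a hyperbolic edge with vertex radius of curvature $\rho$, held at potential $V$ at distance $d$ from a plane through the focal line, gives a vertex field of approximately

$$ E_{\text{vertex}} \approx \frac{2V}{\pi \sqrt{\rho\, d}}, \qquad \rho \ll d, $$

so the field scales as $\rho^{-1/2}$: halving the vertex radius raises the field by a factor of $\sqrt{2}$, which is why the comparison between 500 Å and 1000 Å edge profiles matters.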

Relevance:

10.00%

Publisher:

Abstract:

On the presumption that a sharp edge may be represented by a hyperbola, a conformal transformation method is used to derive electric field equations for a sharp edge suspended above a flat plate. A further transformation is then introduced to give electric field components for a sharp edge suspended above a thin slit. Expressions are deduced for the field strength at the vertex of the edge in both arrangements. The calculated electric field components are used to compute ion trajectories in the simple edge/flat-plate case. The results are considered in relation to future study of ion focusing and unimolecular decomposition of ions in field ionization mass spectrometers.

Relevance:

10.00%

Publisher:

Abstract:

Nowadays the Caspian Sea receives more attention than in the past, both because of its singularity as the biggest lake in the world and because of the very large oil and gas resources it contains. Large-scale oil pollution caused by the development of oil exploration and extraction activities not only creates problems for coastal facilities but also severely damages the environment. In the first stage of this research, the location and quality of oil resources offshore and onshore were determined, and the depletion factors affecting an oil spill, such as evaporation, emulsification, dissolution, and sedimentation, were studied. In the second stage, a sea hydrodynamics model is presented and tested by establishing the hydrodynamic equations governing sea currents and pollutant transport at the sea surface, and by identifying the main parameters in these equations, such as the Coriolis force, bottom friction, and wind. This model is solved using a cell-vertex finite volume method on an unstructured mesh domain. Using the validated model, the currents of the Caspian Sea in different seasons of the year were determined. In the final stage, different scenarios of oil spill movement in the Caspian Sea under various conditions were investigated by modeling three-dimensional oil spill movement at the surface (driven by sea currents) and at depth (driven by buoyancy, drag and gravity forces), applying the main depletion factors mentioned above.
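As a schematic illustration of the transport step described above (illustrative only; the thesis solves the full hydrodynamic equations with a cell-vertex finite volume method on an unstructured mesh), a Lagrangian oil parcel can be advanced with horizontal advection by the surface current plus an empirical wind-drift term, and a vertical rise velocity from the buoyancy/drag balance (Stokes regime for small droplets):

```python
import numpy as np

G = 9.81                        # gravity, m/s^2
RHO_W, RHO_O = 1015.0, 890.0    # water / oil density, kg/m^3 (illustrative)
MU_W = 1.2e-3                   # dynamic viscosity of water, Pa.s
WIND_FACTOR = 0.03              # common empirical surface wind-drift factor

def rise_velocity(d):
    """Terminal rise speed of an oil droplet of diameter d (Stokes law)."""
    return G * d**2 * (RHO_W - RHO_O) / (18.0 * MU_W)

def step(pos, u_current, u_wind, droplet_diameter, dt):
    """Advance one parcel position (x, y, z) by one time step."""
    vx = u_current[0] + WIND_FACTOR * u_wind[0]
    vy = u_current[1] + WIND_FACTOR * u_wind[1]
    vz = rise_velocity(droplet_diameter)   # buoyancy vs. drag
    return pos + dt * np.array([vx, vy, vz])
```

Evaporation, emulsification, dissolution and sedimentation would enter as additional per-parcel mass-balance terms.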

Relevance:

10.00%

Publisher:

Abstract:

This thesis presents approximation algorithms for some NP-Hard combinatorial optimization problems on graphs and networks; in particular, we study problems related to Network Design. Under the widely believed complexity-theoretic assumption that P is not equal to NP, there are no efficient (i.e., polynomial-time) algorithms that solve these problems exactly. Hence, if one desires efficient algorithms for such problems, it is necessary to consider approximate solutions: an approximation algorithm for an NP-Hard problem is a polynomial-time algorithm which, for any instance of the problem, finds a solution whose value is guaranteed to be within a multiplicative factor of the value of an optimal solution to that instance. We attempt to design algorithms for which this factor, referred to as the approximation ratio of the algorithm, is as small as possible. The field of Network Design comprises a large class of problems that deal with constructing networks of low cost and/or high capacity, routing data through existing networks, and many related issues. In this thesis, we focus chiefly on designing fault-tolerant networks. Two vertices u, v in a network are said to be k-edge-connected if deleting any set of k − 1 edges leaves u and v connected; similarly, they are k-vertex-connected if deleting any set of k − 1 other vertices or edges leaves u and v connected. We focus on building networks that are highly connected, meaning that even if a small number of edges and nodes fail, the remaining nodes will still be able to communicate. A brief description of some of our results is given below. We study the problem of building 2-vertex-connected networks that are large and have low cost. Given an n-node graph with costs on its edges and any integer k, we give an O(log n log k)-approximation for the problem of finding a minimum-cost 2-vertex-connected subgraph containing at least k nodes. We also give an algorithm of similar approximation ratio for maximizing the number of nodes in a 2-vertex-connected subgraph subject to a budget constraint on the total cost of its edges. Our algorithms are based on a pruning process that, given a 2-vertex-connected graph, finds a 2-vertex-connected subgraph of any desired size and of density comparable to the input graph, where the density of a graph is the ratio of its cost to the number of vertices it contains. This pruning algorithm is simple and efficient, and is likely to find additional applications. Recent breakthroughs on vertex-connectivity have made use of algorithms for element-connectivity problems. We develop an algorithm that, given a graph with some vertices marked as terminals, significantly simplifies the graph while preserving the pairwise element-connectivity of all terminals; in fact, the resulting graph is bipartite. We believe that our simplification/reduction algorithm will be a useful tool in many settings. We illustrate its applicability by giving algorithms to find many trees that each span a given terminal set, while being disjoint on edges and non-terminal vertices; such problems have applications in VLSI design and other areas. We also use this reduction algorithm to analyze simple algorithms for single-sink network design problems with high vertex-connectivity requirements; we give an O(k log n)-approximation for the problem of k-connecting a given set of terminals to a common sink.
We study similar problems in which different types of links, of varying capacities and costs, can be used to connect nodes; assuming there are economies of scale, we give algorithms to construct low-cost networks with sufficient capacity or bandwidth to simultaneously support flow from each terminal to the common sink along many vertex-disjoint paths. We further investigate capacitated network design, where edges may have arbitrary costs and capacities. Given a connectivity requirement R_uv for each pair of vertices u, v, the goal is to find a low-cost network which, for each pair u, v, can support a flow of R_uv units of traffic between u and v. We study several special cases of this problem, giving both algorithmic and hardness results. In addition to Network Design, we consider certain Traveling Salesperson-like problems, where the goal is to find short walks that visit many distinct vertices. We give a (2 + epsilon)-approximation for Orienteering in undirected graphs, achieving the best known approximation ratio, and the first approximation algorithm for Orienteering in directed graphs. We also give improved algorithms for Orienteering with time windows, in which vertices must be visited between specified release times and deadlines, and for other related problems. These problems are motivated by applications in the fields of vehicle routing, delivery and transportation of goods, and robot path planning.
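By Menger's theorem, the connectivity definitions above have an equivalent flow formulation: u and v are k-edge-connected exactly when there are k edge-disjoint u-v paths, and k-vertex-connected exactly when there are k internally vertex-disjoint u-v paths. A small check with networkx (illustrative; unrelated to the thesis code):

```python
import networkx as nx
from networkx.algorithms.connectivity import (
    local_edge_connectivity, local_node_connectivity)

G = nx.petersen_graph()   # 3-regular and 3-vertex-connected
u, v = 0, 7

# k = maximum number of edge-disjoint (resp. vertex-disjoint) u-v paths;
# deleting any k-1 edges (resp. other vertices) leaves u and v connected.
k_edge = local_edge_connectivity(G, u, v)
k_vertex = local_node_connectivity(G, u, v)
print(f"u,v are {k_edge}-edge-connected and {k_vertex}-vertex-connected")
```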

Relevance:

10.00%

Publisher:

Abstract:

Cell growth of the microalga Haematococcus pluvialis and the bioproduction of carotenoids are influenced by different cultivation conditions. Among natural pigments, astaxanthin has important applications in the pharmaceutical, cosmetic, and food industries. Besides its coloring role, this pigment has biological properties, among them antioxidant activity. Astaxanthin production through the cultivation of H. pluvialis can reach up to 4% of the microalga's dry weight. The objective of this work was to evaluate cell growth and carotenoid production by the microalga Haematococcus pluvialis under different cultivation conditions, as well as the antioxidant activity of the carotenogenic extracts. The autotrophic media Blue Green-11 (BG-11), BAR (Barbera Medium), and BBM (Bold Basal Medium) were used, along with the mixotrophic media BBM plus glucose and BBM plus sodium acetate, employing 10 or 20% inoculum at initial pH 6, 7, or 8, aeration of 0.30 L.min-1, under an illuminance of 6 klx, at 24±1 ºC for 15 days in 1 L photobioreactors. Cell concentration was evaluated daily by absorbance readings at 560 nm. Cell disruption was performed with 0.05 g of dried cells and 2 mL of dimethyl sulfoxide, and total carotenoid concentration was determined from spectrophotometric readings at 474 nm. The BG-11, BBM plus glucose, and BBM plus sodium acetate media showed, respectively, the highest cell growth and total carotenoid production, of 0.64, 1.18, and 0.68 g.L-1, and 3026.66, 2623.12, and 2635.38 µg.g-1, employing 10% inoculum at an initial pH of 7. Based on these results, these three media were selected for the continuation of the work. The BBM plus sodium acetate medium yielded the best maximum cell concentration, 1.29±0.07 g.L-1, and total carotenoids of 5653.56 µg.g-1, employing an initial pH of 7 and an inoculum concentration of 20%. This medium was selected for cultivations with injection of 30% CO2 once a day for 1 hour, carried out over 22 days at an initial pH of 7 with 20% inoculum. Under these conditions, cell growth reached a maximum of 1.13 g.L-1 (10 days), with specific total carotenoids of 2949.91 µg.g-1 and volumetric total carotenoids of 764.79 µg.g-1.L-1 (22 days). The antioxidant capacity of the carotenogenic extracts was also evaluated by the DPPH, FRAP, and ABTS methods; it could not be quantified by DPPH or FRAP. Using the ABTS method, on the other hand, after 90 minutes of reaction the inhibition power found was 35.70% μg-1. Thus, the condition that stands out most is the use of the BBM plus sodium acetate medium, with initial pH 7, 20% inoculum, 0.30 L.min-1 aeration, 6 klx, and 24±1 ºC, since cell growth and carotenoid bioproduction were significantly higher than under the other conditions studied. Furthermore, the carotenoids produced by H. pluvialis under this condition showed antioxidant capacity.

Relevance:

10.00%

Publisher:

Abstract:

This paper presents an investigation of a simple generic hyper-heuristic approach upon a set of widely used constructive heuristics (graph coloring heuristics) in timetabling. Within the hyper-heuristic framework, a Tabu Search approach is employed to search for permutations of graph heuristics which are used for constructing timetables in exam and course timetabling problems. This underpins a multi-stage hyper-heuristic where the Tabu Search employs permutations upon a different number of graph heuristics in two stages. We study this graph-based hyper-heuristic approach within the context of exploring fundamental issues concerning the search space of the hyper-heuristic (the heuristic space) and the solution space. Such issues have not been addressed in other hyper-heuristic research. These approaches are tested on both exam and course benchmark timetabling problems and are compared with fine-tuned, bespoke state-of-the-art approaches. The results are within the range of the best results reported in the literature. The approach described here represents a significantly more generally applicable approach than the current state of the art in the literature. Future work will extend this hyper-heuristic framework by employing methodologies which are applicable to a wider range of timetabling and scheduling problems.
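A compact sketch of the idea (hypothetical code, far simpler than the paper's framework): a fixed-length list of graph coloring heuristics is the point in the heuristic space; each heuristic in the list picks the next event to schedule into the lowest conflict-free timeslot, and a higher-level search (Tabu Search in the paper; a plain first-improvement loop here, for brevity) explores changes to that list.

```python
import random

def largest_degree(G, unscheduled, sched):
    return max(unscheduled, key=lambda v: len(G[v]))

def saturation_degree(G, unscheduled, sched):
    # Prefer events whose neighbors already occupy the most distinct slots.
    return max(unscheduled,
               key=lambda v: len({sched[u] for u in G[v] if u in sched}))

def construct(G, heuristic_list):
    """Build a timetable, using heuristic_list[i % len] for decision i."""
    sched, unscheduled = {}, set(G)
    for i in range(len(G)):
        v = heuristic_list[i % len(heuristic_list)](G, unscheduled, sched)
        used = {sched[u] for u in G[v] if u in sched}
        sched[v] = min(t for t in range(len(G)) if t not in used)
        unscheduled.remove(v)
    return sched

def heuristic_space_search(G, pool, length=5, iters=200, seed=1):
    """Search over heuristic lists, keeping the list whose constructed
    timetable uses the fewest timeslots."""
    rng = random.Random(seed)
    best = [rng.choice(pool) for _ in range(length)]
    best_slots = max(construct(G, best).values()) + 1
    for _ in range(iters):
        cand = best[:]
        cand[rng.randrange(length)] = rng.choice(pool)  # perturb the list
        slots = max(construct(G, cand).values()) + 1
        if slots < best_slots:
            best, best_slots = cand, slots
    return best, best_slots

# G as an adjacency dict {event: set_of_conflicting_events}
G = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
print(heuristic_space_search(G, [largest_degree, saturation_degree]))
```

What the paper investigates is precisely the structure of this heuristic search space and how moves in it translate into moves in the solution space.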

Relevance:

10.00%

Publisher:

Abstract:

This paper is concerned with the hybridization of two graph coloring heuristics (Saturation Degree and Largest Degree), and their application within a hyper-heuristic for exam timetabling problems. Hyper-heuristics can be seen as algorithms which intelligently select appropriate algorithms/heuristics for solving a problem. We developed a Tabu Search based hyper-heuristic to search for heuristic lists (of graph heuristics) for solving problems and investigated the heuristic lists found by employing knowledge discovery techniques. Two hybrid approaches (involving Saturation Degree and Largest Degree), including one which employs Case-Based Reasoning, are presented and discussed. Both the Tabu Search based hyper-heuristic and the hybrid approaches are tested on random and real-world exam timetabling problems. Experimental results are comparable with the best state-of-the-art approaches (as measured against established benchmark problems). The results also demonstrate an increased level of generality in our approach.
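One natural hybridization of the two heuristics, shown as a sketch (the paper's hybrids, discovered via Tabu Search, knowledge discovery, and Case-Based Reasoning, are more elaborate): order events by Saturation Degree and break ties by Largest Degree, which is essentially the classic DSatur rule.

```python
def hybrid_pick(G, unscheduled, sched):
    """Saturation Degree first, Largest Degree as the tie-breaker.
    Same data model as the sketch above: G is an adjacency dict and
    sched maps already-scheduled events to timeslots."""
    def key(v):
        saturation = len({sched[u] for u in G[v] if u in sched})
        return (saturation, len(G[v]))
    return max(unscheduled, key=key)
```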


Relevance:

10.00%

Publisher:

Abstract:

Chains of interacting non-Abelian anyons with local interactions invariant under the action of the Drinfeld double of the dihedral group $D_3$ are constructed. Formulated as a spin chain, the Hamiltonians are generated from commuting transfer matrices of an integrable vertex model for periodic and braided as well as open boundaries. A different anyonic model with the same local Hamiltonian is obtained within the fusion path formulation. This model is shown to be related to an integrable fusion interaction-round-a-face model. Bulk and surface properties of the anyon chain are computed from the Bethe equations for the spin chain. The low-energy effective theories and operator content of the models (in both the spin chain and fusion path formulations) are identified from analytical and numerical studies of the finite-size spectra. For all boundary conditions considered, the continuum theory is found to be a product of two conformal field theories. Depending on the coupling constants, the factors can be a $Z_4$ parafermion or the $\mathcal{M}(5,6)$ minimal model.
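The mechanism behind this construction, in schematic form (not the paper's specific equations): a one-parameter family of pairwise-commuting transfer matrices generates both the local Hamiltonian and a tower of conserved charges,

$$ [T(u),\, T(v)] = 0 \quad \text{for all } u, v, \qquad H \propto \left. \frac{\partial}{\partial u} \log T(u) \right|_{u = u_0}, $$

so the integrability of the underlying vertex model is inherited by the anyon chain.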

Relevance:

10.00%

Publisher:

Abstract:

Edge-labeled graphs have proliferated rapidly over the last decade due to the increased popularity of social networks and the Semantic Web. In social networks, relationships between people are represented by edges and each edge is labeled with a semantic annotation. Hence, a huge single graph can express many different relationships between entities. The Semantic Web represents each single fragment of knowledge as a triple (subject, predicate, object), which is conceptually identical to an edge from subject to object labeled with a predicate. A set of triples constitutes an edge-labeled graph on which knowledge inference is performed. Subgraph matching has been extensively used as a query language for patterns in the context of edge-labeled graphs. For example, in social networks, users can specify a subgraph matching query to find all people that have certain neighborhood relationships. Heavily used fragments of the SPARQL query language for the Semantic Web and graph queries of other graph DBMSs can also be viewed as subgraph matching over large graphs. Though subgraph matching has been extensively studied as a query paradigm in the Semantic Web and in social networks, a user can get a large number of answers in response to a query. These answers can be shown to the user in accordance with an importance ranking. In this thesis proposal, we present four different scoring models along with scalable algorithms to find the top-k answers via a suite of intelligent pruning techniques. The suggested models consist of a practically important subset of the SPARQL query language augmented with some additional useful features. The first model, called Substitution Importance Query (SIQ), identifies the top-k answers whose scores are calculated from matched vertices' properties in each answer in accordance with a user-specified notion of importance. The second model, called Vertex Importance Query (VIQ), identifies important vertices in accordance with a user-defined scoring method that builds on top of various subgraphs articulated by the user. Approximate Importance Query (AIQ), our third model, allows partial and inexact matchings and returns the top-k of them under user-specified approximation terms and scoring functions. In the fourth model, called Probabilistic Importance Query (PIQ), a query consists of several sub-blocks: one mandatory block that must be mapped and other blocks that can be opportunistically mapped. The probability is calculated from various aspects of the answers, such as the number of mapped blocks and the vertices' properties in each block, and the top-k most probable answers are returned. An important distinguishing feature of our work is that we allow the user a huge amount of freedom in specifying: (i) what pattern and approximation he considers important, (ii) how to score answers, irrespective of whether they are vertices or substitutions, and (iii) how to combine and aggregate scores generated by multiple patterns and/or multiple substitutions. Because so much power is given to the user, indexing is more challenging than in situations where additional restrictions are imposed on the queries the user can ask. The proposed algorithms for the first model can also be used for answering SPARQL queries with ORDER BY and LIMIT, and the method for the second model also works for SPARQL queries with GROUP BY, ORDER BY and LIMIT. We test our algorithms on multiple real-world graph databases, showing that our algorithms are far more efficient than popular triple stores.
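A toy version of scored top-k subgraph matching (illustrative only; the four models and pruning techniques in the proposal are far richer): enumerate embeddings of a query pattern with networkx, score each substitution from its matched vertices' properties, and keep the k best answers in a bounded min-heap, skipping any candidate that cannot beat the current k-th score.

```python
import heapq
import networkx as nx
from networkx.algorithms import isomorphism

def top_k_matches(G, pattern, score, k):
    """Return the k highest-scoring embeddings of `pattern` in `G`;
    `score` maps a {pattern_node: graph_node} substitution to a float."""
    matcher = isomorphism.GraphMatcher(G, pattern)
    heap = []   # min-heap: the worst answer kept so far sits on top
    for mapping in matcher.subgraph_isomorphisms_iter():
        answer = {p: g for g, p in mapping.items()}  # pattern -> graph node
        s = score(answer)
        entry = (s, sorted(answer.items()))
        if len(heap) < k:
            heapq.heappush(heap, entry)
        elif s > heap[0][0]:        # prune against the current k-th score
            heapq.heappushpop(heap, entry)
    return sorted(heap, reverse=True)

# Example: rank triangles by the total degree of their matched vertices
G = nx.les_miserables_graph()
triangle = nx.complete_graph(3)
best = top_k_matches(
    G, triangle, lambda a: sum(G.degree(v) for v in a.values()), k=3)
```

(Each triangle appears once per automorphism of the pattern; a real system would canonicalize answers, one of many details elided here.)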

Relevance:

10.00%

Publisher:

Abstract:

We study a one-dimensional lattice model of interacting spinless fermions. This model is integrable for both periodic and open boundary conditions; the latter case includes the presence of Grassmann-valued non-diagonal boundary fields breaking the bulk U(1) symmetry of the model. Starting from the embedding of this model into a graded Yang-Baxter algebra, an infinite hierarchy of commuting transfer matrices is constructed by means of a fusion procedure. For certain values of the coupling constant, related to anisotropies of the underlying vertex model taken at roots of unity, this hierarchy is shown to truncate, giving a finite set of functional equations for the spectrum of the transfer matrices. For generic coupling constants, the spectral problem is formulated in terms of a functional (or TQ-) equation which can be solved by Bethe ansatz methods for periodic and diagonal open boundary conditions. Possible approaches for the solution of the model with generic non-diagonal boundary fields are discussed.
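For orientation, the generic shape of such a functional relation (schematic; the graded version in the paper differs in detail) ties the transfer-matrix eigenvalue $\Lambda(\lambda)$ to an auxiliary $Q$-function,

$$ \Lambda(\lambda)\, Q(\lambda) = a(\lambda)\, Q(\lambda - \eta) + d(\lambda)\, Q(\lambda + \eta), $$

and demanding that $\Lambda(\lambda)$ remain finite at the zeros of $Q$ produces the Bethe ansatz equations for those zeros.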

Relevance:

10.00%

Publisher:

Abstract:

The graph Laplacian operator is widely studied in spectral graph theory, largely due to its importance in modern data analysis. Recently, the Fourier transform and other time-frequency operators have been defined on graphs using Laplacian eigenvalues and eigenvectors. We extend these results and prove that the translation operator to the i-th node is invertible if and only if all eigenvectors are nonzero on the i-th node. Because of this dependency on the support of eigenvectors, we study the characteristic set of Laplacian eigenvectors. We prove that the Fiedler vector of a planar graph cannot vanish on large neighborhoods and then explicitly construct a family of non-planar graphs that do exhibit this property. We then prove original results in modern analysis on graphs. We extend results on spectral graph wavelets to create vertex-dynamic spectral graph wavelets whose support depends on both scale and translation parameters. We prove that Spielman's Twice-Ramanujan graph sparsifying algorithm cannot outperform his conjectured optimal sparsification constant. Finally, we present numerical results on graph conditioning, in which edges of a graph are rescaled to best approximate the complete graph and reduce average commute time.
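The invertibility criterion stated above is straightforward to test numerically. A small sketch (illustrative, not the paper's code): compute the Laplacian eigenvectors and flag the nodes on which some eigenvector (numerically) vanishes; note that for repeated eigenvalues the answer can depend on the choice of eigenbasis.

```python
import numpy as np
import networkx as nx

G = nx.path_graph(5)                    # simple spectrum, so the test is
L = nx.laplacian_matrix(G).toarray()    # basis-independent here
eigvals, eigvecs = np.linalg.eigh(L.astype(float))  # columns = eigenvectors

tol = 1e-10
for i in G.nodes:
    # Translation to node i is invertible iff no eigenvector vanishes at i.
    vanishing = int(np.sum(np.abs(eigvecs[i, :]) < tol))
    status = "invertible" if vanishing == 0 else "not invertible"
    print(f"node {i}: {vanishing} eigenvector(s) vanish -> {status}")
```

On the path graph, eigenvectors vanish at the middle node, so translation to it fails to be invertible, matching the criterion.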

Relevance:

10.00%

Publisher:

Abstract:

A weighted Bethe graph $B$ is obtained from a weighted generalized Bethe tree by identifying each set of children with the vertices of a graph belonging to a family $F$ of graphs. The operation of identifying the root vertex of each of $r$ weighted Bethe graphs to the vertices of a connected graph $\mathcal{R}$ of order $r$ is introduced as the $\mathcal{R}$-concatenation of a family of $r$ weighted Bethe graphs. It is shown that the Laplacian eigenvalues (when $F$ has arbitrary graphs) as well as the signless Laplacian and adjacency eigenvalues (when the graphs in $F$ are all regular) of the $\mathcal{R}$-concatenation of a family of weighted Bethe graphs can be computed (in a unified way) using the stable and low computational cost methods available for the determination of the eigenvalues of symmetric tridiagonal matrices. Unlike the previous results already obtained on this topic, the more general context of families of distinct weighted Bethe graphs is herein considered.
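The computational core referred to above is the symmetric tridiagonal eigenproblem, for which stable, low-cost solvers are standard. A sketch of that final step only (the reduction of a weighted Bethe graph to its tridiagonal matrices is the paper's contribution and is not reproduced here; the numbers below are made up for illustration):

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal

# Hypothetical output of the Bethe-graph reduction: main diagonal d and
# off-diagonal e of one of the symmetric tridiagonal matrices.
d = np.array([2.0, 3.0, 3.0, 2.0])
e = np.array([1.0, 1.4, 1.0])

eigenvalues = eigh_tridiagonal(d, e, eigvals_only=True)
print(eigenvalues)   # contributes a subset of the graph's spectrum
```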