844 results for Computer Engineering|Electrical engineering|Computer science


Relevance: 100.00%

Abstract:

The broad goals of verifiable visualization rely on correct algorithmic implementations. We extend a framework for verification of isosurfacing implementations to check topological properties. Specifically, we use stratified Morse theory and digital topology to design algorithms which verify topological invariants. Our extended framework reveals unexpected behavior and coding mistakes in popular publicly available isosurface codes.
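
The paper's machinery (stratified Morse theory, digital topology) is beyond a snippet, but the flavor of checking a topological invariant on an isosurfacing output can be sketched. Below is a minimal, hypothetical check (not the authors' algorithm): given a triangulated surface, verify that its Euler characteristic is consistent with a closed orientable surface.

```python
# Minimal sketch: check the Euler characteristic of a triangulated
# isosurface. A connected closed orientable surface satisfies
# V - E + F = 2 - 2g for some genus g >= 0, so the value must be
# even and at most 2.
def euler_characteristic(triangles):
    """triangles: list of (i, j, k) vertex-index triples."""
    vertices = set()
    edges = set()
    for i, j, k in triangles:
        vertices.update((i, j, k))
        for a, b in ((i, j), (j, k), (k, i)):
            edges.add((min(a, b), max(a, b)))
    return len(vertices) - len(edges) + len(triangles)

def looks_like_closed_surface(triangles):
    chi = euler_characteristic(triangles)
    return chi % 2 == 0 and chi <= 2

# The boundary of a tetrahedron is a topological sphere (chi = 2).
tet = [(0, 1, 2), (0, 3, 1), (1, 3, 2), (2, 3, 0)]
assert looks_like_closed_surface(tet)
```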

Relevance: 100.00%

Abstract:

We address the irregular shape packing problem in which the container has a fixed width and an open dimension to be minimized. The proposed algorithm constructs the solution from an ordered list of items using a placement heuristic, with simulated annealing as the metaheuristic driving the optimization. A two-level algorithm minimizes the open dimension of the container. To ensure feasible layouts, the concept of a collision-free region is used: it represents all possible translations for an item to be placed and may be degenerate. For a moving item, the placement heuristic detects exact fits (when the item is fully constrained by its surroundings) and exact slides (when the item's position is constrained in all but one direction). The relevance of these positions is analyzed and a new placement heuristic is proposed. Computational comparisons on benchmark problems show that the proposed algorithm generates highly competitive solutions and improves some best-known results. (C) 2012 Elsevier Ltd. All rights reserved.
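
A minimal sketch of the two-level structure described above: simulated annealing permutes the item order (outer level), and a placement heuristic turns an order into a layout length (inner level). The toy shelf heuristic below stands in for the paper's collision-free-region placement; all names and moves are illustrative.

```python
import math
import random

def shelf_place(order, rects, container_w):
    """Toy shelf heuristic: place rectangles left to right, opening a new
    shelf when the container width is exceeded. Returns the used height
    (the open dimension). A stand-in for the paper's placement heuristic."""
    x = shelf_h = total_h = 0
    for i in order:
        w, h = rects[i]
        if x + w > container_w:          # start a new shelf
            total_h += shelf_h
            x = shelf_h = 0
        x += w
        shelf_h = max(shelf_h, h)
    return total_h + shelf_h

def anneal(rects, container_w, iters=2000, t0=1.0, cooling=0.995):
    order = list(range(len(rects)))
    cur = best = shelf_place(order, rects, container_w)
    t = t0
    for _ in range(iters):
        i, j = random.sample(range(len(order)), 2)
        order[i], order[j] = order[j], order[i]       # swap move
        cand = shelf_place(order, rects, container_w)
        if cand <= cur or random.random() < math.exp((cur - cand) / t):
            cur = cand
            best = min(best, cur)
        else:
            order[i], order[j] = order[j], order[i]   # undo the swap
        t *= cooling
    return best

rects = [(4, 3), (3, 2), (5, 4), (2, 2), (6, 3)]
print(anneal(rects, container_w=8))
```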

Relevance: 100.00%

Abstract:

In [1], the authors proposed a framework for automated clustering and visualization of biological data sets, named AUTO-HDS. This letter complements that framework by showing that one of its user-defined parameters can be eliminated, allowing the clustering stage to be implemented more accurately and with reduced computational complexity.

Relevance: 100.00%

Abstract:

Network design underlies many engineering and science problems. Many network design problems are known to be NP-hard, and population-based metaheuristics such as evolutionary algorithms (EAs) have been widely investigated for them. These optimization methods simultaneously generate a large number of potential solutions to explore the search space in breadth and, consequently, to avoid local optima. Obtaining a potential solution usually involves the construction and maintenance of several spanning trees or, more generally, spanning forests. To explore the search space efficiently, special data structures have been developed to provide operations that manipulate a set of spanning trees (a population). For a tree with n nodes, the most efficient data structures available in the literature require O(n) time to generate a new spanning tree that modifies an existing one and to store the new solution. We propose a new data structure, called the node-depth-degree representation (NDDR), and demonstrate that with this encoding, generating a new spanning forest requires O(√n) time on average. Experiments with an EA based on the NDDR applied to large-scale instances of the degree-constrained minimum spanning tree problem show that the implementation adds only small constants and lower-order terms to the theoretical bound.
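
As a rough illustration of why node-depth style encodings support cheap spanning-tree moves, here is a sketch of the plain node-depth representation that the NDDR extends with degree information. The details below are assumptions for illustration, not the paper's exact structure: a tree is stored as a preorder list of (node, depth) pairs, so a subtree is a contiguous slice that can be cut and grafted without rebuilding the whole tree.

```python
# Hypothetical node-depth encoding: a tree as a preorder list of
# (node, depth) pairs; a subtree is a contiguous slice of that list.
def subtree_slice(nd, root_pos):
    """Return the slice of the node-depth list spanning the subtree
    rooted at position root_pos."""
    d = nd[root_pos][1]
    end = root_pos + 1
    while end < len(nd) and nd[end][1] > d:
        end += 1
    return nd[root_pos:end]

def prune_and_graft(nd, root_pos, dest_pos):
    """Move a subtree under the node at dest_pos (dest_pos indexes the
    list *after* removal), shifting depths so it hangs below its new parent."""
    sub = subtree_slice(nd, root_pos)
    rest = nd[:root_pos] + nd[root_pos + len(sub):]
    shift = rest[dest_pos][1] + 1 - sub[0][1]
    shifted = [(v, d + shift) for v, d in sub]
    return rest[:dest_pos + 1] + shifted + rest[dest_pos + 1:]

# Path 0-1-2 with node 3 also under node 0, in preorder:
nd = [(0, 0), (1, 1), (2, 2), (3, 1)]
print(prune_and_graft(nd, 3, 2))   # graft node 3 under node 2
```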

Relevance: 100.00%

Abstract:

In this paper we address the "skull-stripping" problem in 3D MR images. We propose a new method that employs an efficient and unique histogram analysis, whose fundamental component is an algorithm for partitioning a histogram at the position of maximum deviation from a Gaussian fit. In our experiments we use a comprehensive image database, including both synthetic and real MRI, and compare our method with two other well-known methods, BSE and BET. For all datasets we achieved superior results. Our method is also largely independent of parameter tuning and very robust across considerable variations in noise ratio.
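
A minimal sketch of the central histogram-partitioning step, assuming a simple moment-based Gaussian fit (the paper's full skull-stripping pipeline involves more than this):

```python
import numpy as np

def partition_histogram(intensities, bins=256):
    """Fit a Gaussian to the intensity histogram and split it at the bin
    where the fit deviates most from the data."""
    hist, edges = np.histogram(intensities, bins=bins, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    mu, sigma = intensities.mean(), intensities.std()
    gauss = np.exp(-0.5 * ((centers - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    split = np.argmax(np.abs(hist - gauss))   # maximum-deviation bin
    return centers[split]

# Bimodal toy data: a dominant mode plus a secondary tissue mode.
rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(40, 5, 10000), rng.normal(120, 20, 3000)])
print(partition_histogram(data))
```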

Relevance: 100.00%

Abstract:

In this work we introduce a relaxed version of the constant positive linear dependence constraint qualification (CPLD), which we call RCPLD. This development is inspired by a recent generalization of the constant rank constraint qualification by Minchenko and Stakhovski, called RCRCQ. We show that RCPLD is sufficient to ensure the convergence of an augmented Lagrangian algorithm and that it guarantees the validity of an error bound. We also provide proofs and counterexamples that clarify the relations of RCRCQ and RCPLD with other known constraint qualifications. In particular, RCPLD is strictly weaker than both CPLD and RCRCQ, while still stronger than Abadie's constraint qualification. We also verify that the second-order necessary optimality condition holds under RCRCQ.
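
A compact restatement, in LaTeX, of the implication relations the abstract asserts (this is my informal summary, not a formula from the paper):

```latex
% Strict implications among the constraint qualifications discussed:
% CPLD and RCRCQ are each strictly stronger than RCPLD, which in turn
% is strictly stronger than Abadie's CQ.
\[
  \mathrm{CPLD} \;\Longrightarrow\; \mathrm{RCPLD}, \qquad
  \mathrm{RCRCQ} \;\Longrightarrow\; \mathrm{RCPLD}, \qquad
  \mathrm{RCPLD} \;\Longrightarrow\; \text{Abadie's CQ},
\]
% with none of these implications reversible in general.
```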

Relevance: 100.00%

Abstract:

The web services (WS) technology provides a comprehensive solution for representing, discovering, and invoking services in a wide variety of environments, including Service Oriented Architectures (SOA) and grid computing systems. At the core of WS technology lie a number of XML-based standards, such as the Simple Object Access Protocol (SOAP), that have successfully ensured WS extensibility, transparency, and interoperability. Nonetheless, there is an increasing demand to enhance WS performance, which is severely impaired by XML's verbosity. SOAP communications produce considerable network traffic, making them unfit for distributed, loosely coupled, and heterogeneous computing environments such as the open Internet. They also introduce higher latency and processing delays than other technologies, like Java RMI and CORBA. WS research has recently focused on SOAP performance enhancement. Many approaches build on the observation that SOAP message exchange usually involves highly similar messages: those created by the same implementation usually have the same structure, and those sent from a server to multiple clients tend to show similarities in structure and content. Similarity evaluation and differential encoding have thus emerged as SOAP performance enhancement techniques. The main idea is to identify the common parts of SOAP messages, to be processed only once, avoiding a large amount of overhead. Other approaches investigate nontraditional processor architectures, including micro- and macro-level parallel processing solutions, to further increase the processing rates of SOAP/XML software toolkits. This survey paper provides a concise yet comprehensive review of the research efforts aimed at SOAP performance enhancement. A unified view of the problem is provided, covering almost every phase of SOAP processing, including message parsing, serialization, deserialization, compression, multicasting, security evaluation, and data/instruction-level processing.
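
To give a flavor of the differential-encoding idea the survey covers, here is a minimal, hypothetical character-level sketch (production systems use XML-aware differencing): transmit only the edits that turn a shared template into the actual message, so highly similar SOAP payloads cost only a small delta on the wire.

```python
import difflib

template = (
    '<soap:Envelope><soap:Body>'
    '<getQuote><symbol>AAAA</symbol></getQuote>'
    '</soap:Body></soap:Envelope>'
)
message = template.replace('AAAA', 'MSFT')

def encode(template, message):
    """Opcodes describing how to turn the template into the message."""
    sm = difflib.SequenceMatcher(None, template, message)
    return [(op, i1, i2, message[j1:j2])
            for op, i1, i2, j1, j2 in sm.get_opcodes() if op != 'equal']

def decode(template, delta):
    out, pos = [], 0
    for op, i1, i2, repl in delta:
        out.append(template[pos:i1])   # copy the unchanged prefix
        out.append(repl)               # 'delete' ops carry an empty repl
        pos = i2
    out.append(template[pos:])
    return ''.join(out)

delta = encode(template, message)
assert decode(template, delta) == message
print(delta)   # far smaller than the full message
```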

Relevance: 100.00%

Abstract:

Content-based image retrieval (CBIR) is still a challenging issue due to the inherent complexity of images and the choice of the most discriminant descriptors. Recent developments in the field have introduced multidimensional projections to boost accuracy in the retrieval process, but many issues, such as the introduction of pattern recognition tasks and deeper user intervention to assist the choice of the most discriminant features, remain unaddressed. In this paper, we present a novel framework for CBIR that combines pattern recognition tasks, class-specific metrics, and multidimensional projection to devise an effective and interactive image retrieval system. User interaction plays an essential role in the computation of the final multidimensional projection from which image retrieval is performed. Results show that the proposed approach outperforms existing methods, making it a very attractive alternative for managing image data sets.
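
A minimal sketch of one ingredient mentioned above, retrieval under a class-specific metric: distances use per-dimension weights that, in the paper's setting, would be learned per class and refined through user interaction. The weights and data below are illustrative only.

```python
import numpy as np

def retrieve(query, database, weights, k=5):
    """database: (n, d) feature array; weights: (d,) per-dimension scale.
    Ranks items by a weighted squared Euclidean distance to the query."""
    d2 = ((database - query) ** 2 * weights).sum(axis=1)
    return np.argsort(d2)[:k]

rng = np.random.default_rng(1)
db = rng.normal(size=(100, 8))
# Hypothetical class-specific emphasis: the first dimensions matter most.
w = np.array([4.0, 4.0, 1.0, 1.0, 0.25, 0.25, 0.25, 0.25])
print(retrieve(db[0], db, w, k=3))
```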

Relevance: 100.00%

Abstract:

Let G be a graph on n vertices with maximum degree Δ. We use the Lovász local lemma to show the following two results about colourings χ of the edges of the complete graph Kn. If for each vertex v of Kn the colouring χ assigns each colour to at most (n-2)/(22.4Δ²) edges emanating from v, then there is a copy of G in Kn which is properly edge-coloured by χ. This improves on a result of Alon, Jiang, Miller, and Pritikin [Random Struct. Algorithms 23(4), 409-433, 2003]. On the other hand, if χ assigns each colour to at most n/(51Δ²) edges of Kn, then there is a copy of G in Kn such that each edge of G receives a different colour from χ. This proves a conjecture of Frieze and Krivelevich [Electron. J. Comb. 15(1), R59, 2008]. Our proofs rely on a framework developed by Lu and Székely [Electron. J. Comb. 14(1), R63, 2007] for applying the local lemma to random injections. In order to improve the constants in our results we use a version of the local lemma due to Bissacot, Fernández, Procacci, and Scoppola [preprint, arXiv:0910.1824]. (c) 2011 Wiley Periodicals, Inc. Random Struct. Alg., 40, 425-436, 2012.
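
The two sufficient conditions, restated informally in LaTeX (E(v) denotes the set of edges at v; the thresholds are exactly those in the abstract, the phrasing is mine):

```latex
% Properly coloured copy: a local (per-vertex) bound on colour multiplicity.
\[
  \forall v \in V(K_n):\;
  \max_{\text{colour } k} \bigl|\chi^{-1}(k) \cap E(v)\bigr|
  \le \frac{n-2}{22.4\,\Delta^2}
  \;\Longrightarrow\;
  K_n \text{ contains a copy of } G \text{ properly edge-coloured by } \chi.
\]
% Rainbow copy: a global (per-colour) bound suffices.
\[
  \max_{\text{colour } k} \bigl|\chi^{-1}(k)\bigr|
  \le \frac{n}{51\,\Delta^2}
  \;\Longrightarrow\;
  K_n \text{ contains a copy of } G \text{ with all edge colours distinct}.
\]
```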

Relevance: 100.00%

Abstract:

In this paper we use Markov chain Monte Carlo (MCMC) methods to estimate and compare GARCH models from a Bayesian perspective, allowing for possibly heavy-tailed and asymmetric distributions in the error term. We use a general method proposed in the literature to introduce skewness into a continuous unimodal and symmetric distribution. For each model we compute an approximation to the marginal likelihood based on the MCMC output, and from these approximations we compute Bayes factors and posterior model probabilities. (C) 2012 IMACS. Published by Elsevier B.V. All rights reserved.
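
A minimal sketch of the estimation idea under simplifying assumptions that are mine, not the paper's: Gaussian errors (the paper also treats heavy-tailed and skewed ones), flat priors on the admissible region, and a plain random-walk Metropolis sampler for a GARCH(1,1).

```python
import numpy as np

def garch_loglik(params, r):
    """Gaussian GARCH(1,1) log-likelihood; -inf outside the
    positivity/stationarity region (acts as a flat truncated prior)."""
    omega, alpha, beta = params
    if omega <= 0 or alpha < 0 or beta < 0 or alpha + beta >= 1:
        return -np.inf
    h = np.empty_like(r)
    h[0] = r.var()
    for t in range(1, len(r)):
        h[t] = omega + alpha * r[t - 1] ** 2 + beta * h[t - 1]
    return -0.5 * np.sum(np.log(2 * np.pi * h) + r ** 2 / h)

def metropolis(r, n_iter=5000, seed=0):
    rng = np.random.default_rng(seed)
    theta = np.array([0.05 * r.var(), 0.05, 0.90])
    step = np.array([0.1 * theta[0], 0.02, 0.02])   # per-parameter scales
    ll = garch_loglik(theta, r)
    chain = []
    for _ in range(n_iter):
        prop = theta + rng.normal(size=3) * step
        ll_prop = garch_loglik(prop, r)
        if np.log(rng.uniform()) < ll_prop - ll:    # accept/reject
            theta, ll = prop, ll_prop
        chain.append(theta.copy())
    return np.array(chain)

r = np.random.default_rng(1).normal(size=500) * 0.01
print(metropolis(r)[-1000:].mean(axis=0))   # crude posterior mean
```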

Relevance: 100.00%

Abstract:

Creating high-quality quad meshes from triangulated surfaces is a highly nontrivial task that requires consideration of various application-specific quality metrics. In our work, we follow the premise that automatic reconstruction techniques may not generate outputs meeting all the subjective quality expectations of the user. Instead, we put the user at the center of the process by providing a flexible, interactive approach to quadrangulation design. By combining scalar field topology and combinatorial connectivity techniques, we present a new framework, following a coarse-to-fine design philosophy, that allows explicit control of the subjective quality criteria on the output quad mesh at interactive rates. Our quadrangulation framework uses the new notion of Reeb atlas editing to define, with a small number of interactions, a coarse quadrangulation of the model that captures the main features of the shape, with user-prescribed extraordinary vertices and alignment. Fine-grained tuning is easily achieved with the notion of connectivity texturing, which allows additional extraordinary vertex specification and explicit feature alignment to capture high-frequency geometry. Experiments demonstrate the interactivity and flexibility of our approach, as well as its ability to generate quad meshes of arbitrary resolution with high-quality statistics, while meeting the user's own subjective requirements.

Relevance: 100.00%

Abstract:

According to recent research carried out in the foundry sector, one of the industry's most important concerns is improving production planning. A foundry production plan involves two dependent stages: (1) determining the alloys to be melted and (2) determining the lots to be produced. The purpose of this study is to draw up minimum-cost production plans for the lot-sizing problem faced by small foundries. As suggested in the literature, the proposed heuristic addresses the problem stages hierarchically: first the alloys are determined and then the items produced from them. We propose a knapsack problem as a tool to determine the items to be produced from each furnace load, and a genetic algorithm to explore possible sets of alloys and determine the production plan for a small foundry. Our method attempts to overcome the difficulties in finding good production plans exhibited by the method proposed in the literature. Computational experiments show that the proposed methods yield better results than those in the literature. Furthermore, the proposed methods do not require commercial software, which is favorable for small foundries. (C) 2010 Elsevier Ltd. All rights reserved.
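
A minimal sketch of the second stage as described above: once an alloy is fixed for a furnace load of capacity C, choosing which item lots to cast is a 0/1 knapsack, solvable by the classic dynamic program. The capacities, demands, and values below are illustrative, not from the paper.

```python
def knapsack(capacity, weights, values):
    """0/1 knapsack by DP over capacities; weights are in the same
    discrete units as capacity. Returns the best achievable value."""
    best = [0] * (capacity + 1)
    for w, v in zip(weights, values):
        for c in range(capacity, w - 1, -1):   # reverse: each item used once
            best[c] = max(best[c], best[c - w] + v)
    return best[capacity]

# Hypothetical furnace of 100 units; item lots demanding this alloy:
print(knapsack(100, [30, 45, 60, 20], [5, 8, 11, 3]))
```

In the paper's hierarchy, a genetic algorithm would propose the alloy for each furnace load, with a knapsack of this kind evaluating the items produced from it.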

Relevance: 100.00%

Abstract:

Consider the NP-hard problem of finding, in a given simple graph G, a series-parallel subgraph with the maximum number of edges. The algorithm that, given a connected graph G, outputs a spanning tree of G is a 1/2-approximation: if n is the number of vertices in G, any spanning tree of G has n-1 edges, while any series-parallel graph on n vertices has at most 2n-3 edges. We present a 7/12-approximation for this problem and results showing the limits of our approach.
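
The baseline in the abstract is easy to make concrete: a BFS tree has n-1 edges, the optimum has at most 2n-3, and (n-1)/(2n-3) ≥ 1/2 for all n ≥ 2. A minimal sketch (not the paper's 7/12 algorithm):

```python
from collections import deque

def bfs_tree_edges(adj):
    """adj: dict vertex -> iterable of neighbours, vertices 0..n-1.
    Returns the edges of a BFS spanning tree from vertex 0."""
    seen, tree = {0}, []
    q = deque([0])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                tree.append((u, v))
                q.append(v)
    return tree   # n-1 edges if the graph is connected

adj = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 3], 3: [1, 2]}
print(bfs_tree_edges(adj))   # 3 edges; the optimum here has at most 2*4-3 = 5
```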

Relevance: 100.00%

Abstract:

Distributed Software Development (DSD) is a development strategy that meets globalization's demands for increased productivity and reduced cost. However, temporal distance, geographical dispersion, and socio-cultural differences introduce new challenges and, especially, new requirements for the communication, coordination, and control of projects. Among these demands is the need for a software process that adequately supports distributed software development. This paper presents an integrated approach to software development and testing that takes the peculiarities of distributed teams into account. Its purpose is to support DSD by providing better project visibility, improving communication between the development and test teams, and reducing the ambiguity and difficulty of understanding artifacts and activities. The integrated approach was conceived on four pillars: (i) identifying the DSD peculiarities relevant to development and test processes; (ii) defining the elements needed to compose an integrated development-and-test approach that supports distributed teams; (iii) describing and specifying the approach's workflows, artifacts, and roles; and (iv) representing the approach appropriately to enable its effective communication and understanding.

Relevance: 100.00%

Abstract:

A deep theoretical analysis of the graph cut image segmentation framework presented in this paper simultaneously translates into important contributions in several directions. The most important practical contribution of this work is a full theoretical description, and implementation, of a novel powerful segmentation algorithm, GC_max. The output of GC_max coincides with a version of a segmentation algorithm known as Iterative Relative Fuzzy Connectedness, IRFC. However, GC_max is considerably faster than the classic IRFC algorithm, which we prove theoretically and show experimentally. Specifically, we prove that, in the worst-case scenario, the GC_max algorithm runs in linear time with respect to the variable M = |C| + |Z|, where |C| is the image scene size and |Z| is the size of the allowable range, Z, of the associated weight/affinity function. For most implementations, Z is identical to the set of allowable image intensity values, and its size can be treated as small with respect to |C|, meaning that O(M) = O(|C|). In such a situation, GC_max runs in linear time with respect to the image size |C|. We show that the output of GC_max constitutes a solution of a graph cut energy minimization problem, in which the energy is defined as the ℓ∞ norm ‖F_P‖_∞ of the map F_P that associates, with every element e from the boundary of an object P, its weight w(e). This formulation brings IRFC algorithms into the realm of graph cut energy minimizers, with energy functions ‖F_P‖_q for q ∈ [1,∞]. Of these, the best-known minimization problem is for the energy ‖F_P‖_1, which is solved by the classic min-cut/max-flow algorithm, often referred to as the Graph Cut algorithm. We note that the minimization problem for ‖F_P‖_q, q ∈ [1,∞), is identical to that for ‖F_P‖_1 when the original weight function w is replaced by w^q. Thus, any algorithm GC_sum solving the ‖F_P‖_1 minimization problem also solves the one for ‖F_P‖_q with q ∈ [1,∞), so just two algorithms, GC_sum and GC_max, suffice to solve all ‖F_P‖_q-minimization problems. We also show that, for any fixed weight assignment, the solutions of the ‖F_P‖_q-minimization problems converge to a solution of the ‖F_P‖_∞-minimization problem (the fact that ‖F_P‖_∞ = lim_{q→∞} ‖F_P‖_q is not enough to deduce this). An experimental comparison of the performance of the GC_max and GC_sum algorithms is included, concentrating on the actual (as opposed to provable worst-case) running times, as well as the influence of the choice of seeds on the output.
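
A minimal sketch (mine, not the paper's implementation) of the energies involved: for the boundary of an object P with edge weights w(e), the ℓ1 energy is what GC_sum-type (min-cut/max-flow) algorithms minimize, the ℓ∞ energy is what GC_max minimizes, and the w → w^q reweighting reduces the ℓq problem to the ℓ1 problem.

```python
# Cut energies over the boundary of an object P, given its edge weights.
def energy_l1(boundary_weights):
    return sum(boundary_weights)            # ||F_P||_1

def energy_linf(boundary_weights):
    return max(boundary_weights)            # ||F_P||_inf

def energy_lq_via_l1(boundary_weights, q):
    # ||F_P||_q^q equals the l1 energy of the reweighted cut w -> w**q,
    # which is why one l1 solver handles every finite q.
    return energy_l1([w ** q for w in boundary_weights])

cut = [3.0, 1.0, 2.0]
print(energy_l1(cut), energy_linf(cut), energy_lq_via_l1(cut, 2) ** 0.5)
```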