953 results for Complex combinatorial problem


Relevance:

30.00%

Abstract:

This paper presents a novel approach to the automatic classification of very large data sets composed of terahertz pulse transient signals, highlighting their potential use in biochemical, biomedical, pharmaceutical and security applications. Two different types of THz spectra are considered in the classification process. First, a binary classification study of poly-A and poly-C ribonucleic acid samples is performed. This is then contrasted with a difficult multi-class classification problem involving spectra from six different powder samples that, although fairly indistinguishable in the optical spectrum, possess a few discernible spectral features in the terahertz part of the spectrum. Classification is performed using a complex-valued extreme learning machine algorithm that takes into account features in both the amplitude and the phase of the recorded spectra. Classification speed and accuracy are contrasted with those achieved using a support vector machine classifier. The study systematically compares the classifier performance achieved after adopting different Gaussian kernels when separating amplitude and phase signatures. The two signatures are presented as feature vectors for both training and testing purposes. The study confirms the utility of complex-valued extreme learning machine algorithms for the classification of the very large data sets generated with current terahertz imaging spectrometers. The classifier can take into consideration heterogeneous layers within an object, as would be required in a tomographic setting, and is sufficiently robust to detect patterns hidden inside noisy terahertz data sets. The proposed study opens up the opportunity to establish complex-valued extreme learning machine algorithms as new chemometric tools that will assist the wider proliferation of terahertz sensing technology for chemical sensing, quality control, security screening and clinical diagnosis. Furthermore, the proposed algorithm should also be very useful in other applications requiring the classification of very large datasets.
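
As a rough illustration of the classification machinery described above, the following minimal sketch shows a complex-valued extreme learning machine in its usual form: fixed random complex hidden weights and a single least-squares solve for the output weights. The tanh activation, layer size and the amplitude-plus-j*phase encoding are illustrative assumptions, not the paper's exact design.

    import numpy as np

    # Minimal complex-valued ELM sketch (illustrative assumptions:
    # tanh activation, amplitude + 1j*phase feature encoding).
    def fit_complex_elm(X, labels, n_hidden=64, seed=0):
        """X: (n_samples, n_features) complex array; labels: int class ids."""
        rng = np.random.default_rng(seed)
        labels = np.asarray(labels)
        # Fixed random complex input weights and biases.
        W = rng.standard_normal((X.shape[1], n_hidden)) \
            + 1j * rng.standard_normal((X.shape[1], n_hidden))
        b = rng.standard_normal(n_hidden) + 1j * rng.standard_normal(n_hidden)
        H = np.tanh(X @ W + b)                # hidden-layer activations
        T = np.eye(labels.max() + 1)[labels]  # one-hot targets
        beta = np.linalg.pinv(H) @ T          # least-squares output weights
        return W, b, beta

    def predict_complex_elm(X, W, b, beta):
        return np.argmax((np.tanh(X @ W + b) @ beta).real, axis=1)

Training reduces to a single pseudoinverse, which is what makes ELM-style classifiers attractive for very large spectral data sets.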

Relevance:

30.00%

Abstract:

The response of the Southern Ocean to a repeating seasonal cycle of ozone loss is studied in two coupled climate models and found to comprise both fast and slow processes. The fast response is similar to the inter-annual signature of the Southern Annular Mode (SAM) on Sea Surface Temperature (SST), onto which the ozone-hole forcing projects in the summer. It comprises enhanced northward Ekman drift inducing negative summertime SST anomalies around Antarctica, earlier sea ice freeze-up the following winter, and northward expansion of the sea ice edge year-round. The enhanced northward Ekman drift, however, results in upwelling of warm waters from below the mixed layer in the region of seasonal sea ice. With sustained bursts of westerly winds induced by ozone depletion, this warming from below eventually dominates over the cooling from anomalous Ekman drift. The resulting slow-timescale response (years to decades) leads to warming of SSTs around Antarctica and ultimately a reduction in sea-ice cover year-round. This two-timescale behavior, rapid cooling followed by slow but persistent warming, is found in the two coupled models analysed, one with an idealized geometry, the other a complex global climate model with realistic geometry. Processes that control the timescale of the transition from cooling to warming, and their uncertainties, are described. Finally, we discuss the implications of our results for rationalizing previous studies of the effect of the ozone hole on SST and sea-ice extent.
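
The two-timescale behavior can be caricatured as the superposition of a fast cooling mode and a slow warming mode responding to a step change in the winds; all numbers in this toy sketch are invented for illustration and are not taken from either model.

    import numpy as np

    # Toy two-timescale SST response (illustrative values only).
    A_fast, tau_fast = 0.4, 2.0     # K, years: fast Ekman-drift cooling
    B_slow, tau_slow = 0.8, 30.0    # K, years: slow upwelling warming

    t = np.linspace(0.0, 100.0, 501)          # years after forcing onset
    sst = -A_fast * (1 - np.exp(-t / tau_fast)) \
          + B_slow * (1 - np.exp(-t / tau_slow))

    # Negative at first (the fast mode saturates quickly), then crossing
    # zero and warming as the slow mode takes over.
    print(f"sign change after ~{t[np.argmax(sst > 0)]:.1f} years")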

Relevance:

30.00%

Abstract:

Searching a dataset for elements that are similar to a given query element is a core problem in applications that manage complex data, and it has been aided by metric access methods (MAMs). A growing number of applications require indices that must be built faster and repeatedly, while also providing faster responses to similarity queries. The increase in main memory capacity and its falling cost also motivate the use of memory-based MAMs. In this paper, we propose the Onion-tree, a new and robust dynamic memory-based MAM that slices the metric space into disjoint subspaces to provide quick indexing of complex data. It introduces three major characteristics: (i) a partitioning method that controls the number of disjoint subspaces generated at each node; (ii) a replacement technique that can change the leaf node pivots in insertion operations; and (iii) range and k-NN extended query algorithms to support the new partitioning method, including a new visit order of the subspaces in k-NN queries. Performance tests with both real-world and synthetic datasets showed that the Onion-tree is very compact. Comparisons of the Onion-tree with the MM-tree and a memory-based version of the Slim-tree showed that the Onion-tree was always faster to build the index. The experiments also showed that the Onion-tree significantly improved range and k-NN query processing performance and was the most efficient MAM, followed by the MM-tree, which in turn outperformed the Slim-tree in almost all the tests. (C) 2010 Elsevier B.V. All rights reserved.
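
The Onion-tree's node layout is not reproduced here, but the triangle-inequality filtering that pivot-based MAMs build on can be sketched as follows; the single-pivot index and Euclidean metric are simplifying assumptions.

    import math

    # Generic pivot-based metric range search (illustrative principle,
    # not the Onion-tree's actual partitioning).
    def build_index(data, pivot, dist=math.dist):
        # Precompute each object's distance to the pivot.
        return [(obj, dist(obj, pivot)) for obj in data]

    def range_query(index, pivot, query, radius, dist=math.dist):
        dqp = dist(query, pivot)
        hits = []
        for obj, dop in index:
            # Triangle inequality: |d(q,p) - d(o,p)| <= d(q,o), so a
            # large lower bound lets us skip the real distance call.
            if abs(dqp - dop) <= radius and dist(query, obj) <= radius:
                hits.append(obj)
        return hits

    data = [(0, 0), (1, 2), (5, 5), (9, 1)]
    idx = build_index(data, pivot=(0, 0))
    print(range_query(idx, (0, 0), query=(1, 1), radius=2.0))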

Relevance:

30.00%

Abstract:

This paper addresses the independent multi-plant, multi-period, and multi-item capacitated lot sizing problem where transfers between the plants are allowed. This is an NP-hard combinatorial optimization problem, and few solution methods have been proposed to solve it. We develop a GRASP (Greedy Randomized Adaptive Search Procedure) heuristic as well as a path-relinking intensification procedure to find cost-effective solutions for this problem. In addition, the proposed heuristics are used to solve some instances of the capacitated lot sizing problem with parallel machines. The results of the computational tests show that the proposed heuristics outperform other heuristics previously described in the literature. The results are confirmed by statistical tests. (C) 2009 Elsevier B.V. All rights reserved.
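
The construction and local-search moves for the lot-sizing problem are problem-specific and not given in the abstract, but the general shape of a GRASP run is shown in this hedged skeleton; elem_cost, sol_cost and local_search are caller-supplied placeholders.

    import random

    # GRASP skeleton: greedy randomized construction + local search.
    def grasp(candidates, elem_cost, sol_cost, local_search,
              iters=100, alpha=0.3, seed=0):
        rng = random.Random(seed)
        best = None
        for _ in range(iters):
            remaining, solution = list(candidates), []
            while remaining:
                # Restricted candidate list (RCL) of the cheapest moves.
                costs = [elem_cost(c, solution) for c in remaining]
                cut = min(costs) + alpha * (max(costs) - min(costs))
                rcl = [c for c, k in zip(remaining, costs) if k <= cut]
                pick = rng.choice(rcl)
                solution.append(pick)
                remaining.remove(pick)
            solution = local_search(solution)      # improvement phase
            if best is None or sol_cost(solution) < sol_cost(best):
                best = solution
        return best

A path-relinking intensification, as used in the paper, would additionally walk from each new local optimum toward an elite solution, evaluating the intermediate solutions along the way.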

Relevance:

30.00%

Abstract:

Security administrators face the challenge of designing, deploying and maintaining a variety of configuration files related to security systems, especially in large-scale networks. These files have heterogeneous syntaxes and follow differing semantic concepts. Nevertheless, they are interdependent, because security services have to cooperate and their configurations must be consistent with each other, so that global security policies are completely and correctly enforced. To tackle this problem, our approach supports a convenient definition of an abstract high-level security policy and provides an automated derivation of the desired configuration files. It is an extension of policy-based management and policy hierarchies, combining model-based management (MBM) with system modularization. MBM employs an object-oriented model of the managed system to obtain the details needed for automated policy refinement. The modularization into abstract subsystems (ASs) segments the system, and the model, into units that more closely encapsulate related system components and provide focused abstract views. As a result, scalability is achieved and even comprehensive IT systems can be modelled in a unified manner. The associated tool MoBaSeC (Model-Based-Service-Configuration) supports interactive graphical modelling, automated model analysis and policy refinement with the derivation of configuration files. We describe the MBM and AS approaches, outline the tool functions and illustrate their application and the results obtained. Copyright (C) 2010 John Wiley & Sons, Ltd.
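
The idea of refining one abstract policy over an object model into concrete per-device configuration can be illustrated with a toy sketch; all class names, fields and the rule syntax below are invented for illustration and are far simpler than MoBaSeC's actual model.

    from dataclasses import dataclass

    # Toy model-based policy refinement (invented names and rule syntax).
    @dataclass
    class Host:
        name: str
        ip: str
        firewall: str   # which firewall protects this host

    def refine(policy, hosts):
        """Refine one abstract policy (src, dst, service) into
        per-firewall rule lines."""
        src, dst, service = policy
        by_name = {h.name: h for h in hosts}
        s, d = by_name[src], by_name[dst]
        port = {"https": 443, "ssh": 22}[service]
        # Emit one concrete rule per firewall on the path (simplified
        # here to the destination's firewall only).
        return {d.firewall: [f"permit tcp {s.ip} {d.ip} eq {port}"]}

    hosts = [Host("web", "10.0.0.5", "fw1"), Host("db", "10.0.1.9", "fw2")]
    print(refine(("web", "db", "https"), hosts))
    # {'fw2': ['permit tcp 10.0.0.5 10.0.1.9 eq 443']}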

Relevance:

30.00%

Abstract:

In the late seventies, Megiddo proposed a way to use an algorithm for the problem of minimizing a linear function $a_0 + a_1 x_1 + \cdots + a_n x_n$ subject to certain constraints to solve the problem of minimizing a rational function of the form $(a_0 + a_1 x_1 + \cdots + a_n x_n)/(b_0 + b_1 x_1 + \cdots + b_n x_n)$ subject to the same set of constraints, assuming that the denominator is always positive. Using a rather strong assumption, Hashizume et al. extended Megiddo's result to include approximation algorithms. Their assumption essentially asks for the existence of good approximation algorithms for optimization problems with possibly negative coefficients in the (linear) objective function, which is rather unusual for most combinatorial problems. In this paper, we present an alternative extension of Megiddo's result for approximations that avoids this issue and applies to a large class of optimization problems. Specifically, we show that, if there is an $\alpha$-approximation for the problem of minimizing a nonnegative linear function subject to constraints satisfying a certain increasing property, then there is an $\alpha$-approximation (a $1/\alpha$-approximation) for the problem of minimizing (maximizing) a nonnegative rational function subject to the same constraints. Our framework applies to covering problems and network design problems, among others.
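
The reduction rests on the parametric reformulation: x is optimal for the ratio iff it minimizes $a \cdot x - \lambda^* (b \cdot x)$ at the optimal value $\lambda^*$. The sketch below uses Dinkelbach's iteration to find $\lambda^*$; it illustrates the same principle but is not Megiddo's own parametric-search machinery, and linear_min is a caller-supplied oracle for the linear problem.

    # Dinkelbach-style iteration for min (a.x)/(b.x), assuming b.x > 0
    # on the feasible set. `linear_min(c)` must return a feasible x
    # minimizing c.x (the linear-objective algorithm the reduction uses).
    def minimize_ratio(a, b, linear_min, x0, tol=1e-9):
        dot = lambda u, v: sum(ui * vi for ui, vi in zip(u, v))
        x = x0
        while True:
            lam = dot(a, x) / dot(b, x)                  # current ratio
            x_new = linear_min([ai - lam * bi for ai, bi in zip(a, b)])
            # If nothing makes a.x - lam*b.x negative, lam is optimal.
            if dot(a, x_new) - lam * dot(b, x_new) >= -tol:
                return x, lam
            x = x_new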

Relevance:

30.00%

Abstract:

This paper presents the formulation of a combinatorial optimization problem with the following characteristics: (i) the search space is the power set of a finite set, structured as a Boolean lattice; (ii) the cost function forms a U-shaped curve when applied to any lattice chain. This formulation applies to feature selection in the context of pattern recognition. The known approaches to this problem are branch-and-bound algorithms and heuristics that explore the search space only partially. Branch-and-bound algorithms are equivalent to the full search, while heuristics are not. This paper presents a branch-and-bound algorithm that differs from the known ones by exploiting the lattice structure and the U-shaped chain curves of the search space. The main contribution of this paper is the architecture of this algorithm, which is based on the representation and exploration of the search space using new lattice properties proven here. Several experiments with well-known public data indicate the superiority of the proposed method over sequential floating forward selection (SFFS), a popular heuristic that gives good results in very short computational time. In all experiments, the proposed method obtained better or equal results in similar or even smaller computational time. (C) 2009 Elsevier Ltd. All rights reserved.
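
A heuristic simplification of the chain-pruning idea (not the paper's actual branch-and-bound architecture, which is exact) can be sketched as follows: grow subsets along chains and stop extending a chain once the cost rises, since the U-shape guarantees that chain only worsens from there.

    # Hedged sketch of U-curve chain pruning over the subset lattice.
    def u_curve_search(features, cost):
        root = frozenset()
        best_set, best_cost = root, cost(root)
        stack, seen = [(root, best_cost)], {root}
        while stack:
            subset, c = stack.pop()
            if c < best_cost:
                best_set, best_cost = subset, c
            for f in features - subset:
                child = subset | {f}
                if child in seen:
                    continue
                cc = cost(child)
                if cc <= c:          # still descending on this chain
                    seen.add(child)
                    stack.append((child, cc))
                # else prune: the U-shape says this chain only worsens
        return best_set, best_cost

    # Toy cost: distance to an "ideal" feature set (illustrative only).
    ideal = {"f1", "f3"}
    print(u_curve_search({"f1", "f2", "f3"}, lambda s: len(s ^ ideal)))
    # finds frozenset({'f1', 'f3'}) with cost 0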

Relevance:

30.00%

Abstract:

We solve the Björling problem for timelike surfaces in Lorentz-Minkowski space through a split-complex representation formula obtained for this kind of surface. Our approach uses split-complex numbers and natural split-holomorphic extensions. As applications, we show that minimal timelike surfaces of revolution, as well as minimal ruled timelike surfaces, can be characterized as solutions of suitably chosen Björling problems in Lorentz-Minkowski space. (C) 2010 Elsevier Inc. All rights reserved.
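
For orientation, the split-complex algebra and the classical Björling formula that the paper adapts can be indicated schematically; the precise timelike formula is the subject of the paper itself and is only sketched here under the assumption that it follows the classical pattern with $i$ replaced by the split-complex unit $j$.

    % Split-complex numbers: z = x + jy with j^2 = +1, j not real.
    % They play the role for timelike surfaces in Lorentz-Minkowski
    % space L^3 that ordinary complex numbers play for minimal
    % surfaces in R^3.
    %
    % Classical Bjorling formula in R^3, for a real-analytic curve
    % \alpha with unit normal field n along it:
    \[
      X(z) \;=\; \operatorname{Re}\Big( \alpha(z)
          \;-\; i \int_{u_0}^{z} n(w) \times \alpha'(w)\, dw \Big).
    \]
    % Schematically, the split-complex representation replaces i by j
    % and holomorphic extension by split-holomorphic extension.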

Relevance:

30.00%

Abstract:

Connecting different electrical, network and data devices at minimum cost along the shortest path is a complex job. In huge buildings, where the devices are placed at different locations on different floors and only some specific routes are available for the cables and buses, the shortest-path search becomes even more complex. The aim of this thesis project is to develop an application which identifies the best path to connect all objects or devices by following the specific routes. To address the above issue we adopted three algorithms, a greedy algorithm, simulated annealing and exhaustive search, and analyzed their results. The given problem is similar to the Travelling Salesman Problem. Exhaustive search is the best algorithm in the sense that it checks each and every possibility and gives the exact result, but it is an impractical solution because of its huge time consumption: if the number of objects grows beyond 12, it takes hours to find the shortest path. Simulated annealing emerged with some promising results at a lower time cost. Because of its probabilistic nature, simulated annealing may return a non-optimal answer, but it gives a near-optimal solution in a reasonable duration. The greedy algorithm is not a good choice for this problem. Thus, simulated annealing proved to be the best algorithm for this problem. The project has been implemented in the C language and takes its input from, and stores its output in, an Excel workbook.
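
A minimal sketch of the simulated annealing loop on a TSP-like tour, assuming a symmetric distance matrix and 2-opt moves; the thesis's C implementation and Excel I/O are not reproduced here.

    import math, random

    # Simulated annealing on a TSP-like routing problem (sketch).
    def tour_length(tour, dist):
        return sum(dist[tour[i - 1]][tour[i]] for i in range(len(tour)))

    def anneal(dist, temp=100.0, cooling=0.995, steps=20000, seed=0):
        rng = random.Random(seed)
        tour = list(range(len(dist)))
        rng.shuffle(tour)
        cur_len = tour_length(tour, dist)
        best, best_len = tour[:], cur_len
        for _ in range(steps):
            i, j = sorted(rng.sample(range(len(dist)), 2))
            cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]  # 2-opt
            cand_len = tour_length(cand, dist)
            # Always accept improvements; accept worse tours with a
            # probability that shrinks as the temperature cools.
            if cand_len < cur_len or \
                    rng.random() < math.exp((cur_len - cand_len) / temp):
                tour, cur_len = cand, cand_len
                if cur_len < best_len:
                    best, best_len = tour[:], cur_len
            temp *= cooling
        return best, best_len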

Relevance:

30.00%

Abstract:

Tackling a problem mostly requires an ability to read it, conceptualize it, represent it, define it, and then apply the necessary mechanisms to solve it. This may sound self-evident, except when the problem to be tackled happens to be “complex,” “ill-structured,” and/or “wicked.” Corruption is one of those kinds of problems. Both in its global and national manifestations it is ill-structured. Where it is structural in nature, endemic and pervasive, it is perhaps even wicked. Qualities of this kind impose modest expectations regarding the possibility of any definitive solution to this insidious phenomenon. If so, it may not suffice to address the problem of corruption using existing categories of law and/or good governance, which overlook the “long-term memory” of the collective and the culture-specific dimensions of the subject. Such socio-historical conditions require focusing on the interactive and self-reproducing networks of corruption and attempting to ‘subvert’ that phenomenon’s entire matrix. Concepts such as collective responsibility, collective punishment and sanctions are introduced as relevant categories in the structural, as well as behavioral, subversion of some of the most prevalent aspects of corruption. These concepts may help in evolving a new perspective on corruption-fighting strategies.

Relevance:

30.00%

Abstract:

The recent advances in CMOS technology have allowed for the fabrication of transistors with submicronic dimensions, making possible the integration of tens of millions of devices in a single chip that can be used to build very complex electronic systems. This increase in design complexity has created a need for more efficient verification tools that can incorporate more appropriate physical and computational models. Timing verification aims at determining whether the timing constraints imposed on the design can be satisfied or not. It can be performed by circuit simulation or by timing analysis. Although simulation tends to furnish the most accurate estimates, it has the drawback of being stimuli-dependent. Hence, in order to ensure that the critical situation is taken into account, one must exercise all possible input patterns. Obviously, this is not feasible due to the high complexity of current designs. To circumvent this problem, designers must rely on timing analysis. Timing analysis is an input-independent verification approach that models each combinational block of a circuit as a directed acyclic graph, which is used to estimate the critical delay. The first timing analysis tools used only the circuit topology information to estimate circuit delay, and were thus referred to as topological timing analyzers. However, such a method may result in overly pessimistic delay estimates, since the longest paths in the graph may not be able to propagate a transition, that is, they may be false. Functional timing analysis, in turn, considers not only the circuit topology, but also the temporal and functional relations between circuit elements. Functional timing analysis tools may differ in three aspects: the set of sensitization conditions necessary to declare a path sensitizable (the so-called path sensitization criterion), the number of paths handled simultaneously, and the method used to determine whether the sensitization conditions are satisfiable. Currently, the two most efficient approaches test the sensitizability of entire sets of paths at a time: one is based on automatic test pattern generation (ATPG) techniques and the other translates the timing analysis problem into a satisfiability (SAT) problem. Although timing analysis has been exhaustively studied in the last fifteen years, some specific topics have not yet received the required attention. One such topic is the applicability of functional timing analysis to circuits containing complex gates. This is the central concern of this thesis. In addition, as a necessary step to set the scene, a detailed and systematic study of functional timing analysis is also presented.
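
The topological (input-independent, and hence possibly pessimistic) part of the analysis amounts to a longest-path computation on the DAG; a minimal sketch, with invented gate names and delays:

    # Topological timing analysis sketch: critical delay = longest
    # input-to-output path in the combinational DAG. False paths are
    # deliberately ignored (that is the functional part of the problem).
    def critical_delay(fanins, delay):
        """fanins: node -> list of fanin nodes (empty for inputs);
        delay: node -> gate delay (inputs default to 0)."""
        arrival = {}
        def arrive(n):
            if n not in arrival:
                at = max((arrive(f) for f in fanins[n]), default=0.0)
                arrival[n] = at + delay.get(n, 0.0)
            return arrival[n]
        return max(arrive(n) for n in fanins)

    fanins = {"a": [], "b": [], "g1": ["a", "b"], "g2": ["g1", "b"]}
    delay = {"g1": 2.0, "g2": 3.0}
    print(critical_delay(fanins, delay))   # 5.0 along a -> g1 -> g2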

Relevance:

30.00%

Abstract:

In assessing the economic impact of a sector or group of sectors on a single-region or multiregional economy, input-output analysis has proven to be a popular method. However, there has been a problem in displaying all the information that can be obtained from this analytical approach. In this paper, we have tried to set new directions in the use of input-output analysis by presenting an improved way of looking at economic landscapes. While this is not a new concept, a new meaning is explored here; essentially, it is now possible to visualize, in a single picture, all the relations in the economy, as well as how one sector is related to the other sectors/regions in the economy. These relations can be measured in terms of structural change, production, value added, employment, imports, etc. While not all the possibilities can be explored in this paper, the basic idea is given here and the attentive reader can uncover the various possibilities. To illustrate the power of analysis provided by the economic landscapes, an application is made to the sugar cane complex using an interregional input-output system for the Brazilian economy, constructed for two regions (Northeast and Rest of Brazil) for the years 1985, 1992, and 1995.
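
The computation underlying such landscapes is the standard Leontief model: the total output $x$ needed to satisfy final demand $f$ under technical coefficients $A$ is $x = (I - A)^{-1} f$. A minimal sketch with made-up two-sector numbers (not the Brazilian data):

    import numpy as np

    # Leontief input-output sketch (illustrative 2-sector numbers).
    A = np.array([[0.2, 0.3],     # input from sector i per unit of j
                  [0.1, 0.4]])
    f = np.array([100.0, 50.0])   # final demand by sector

    L = np.linalg.inv(np.eye(2) - A)   # Leontief inverse
    x = L @ f                          # gross output meeting the demand
    print(x)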

Relevance:

30.00%

Abstract:

Telecommunications play a key role in contemporary society. However, as new technologies reach the market, the demand for new products and services that depend on the offered infrastructure also grows, making the problems of planning telecommunications networks, despite the advances in technology, increasingly large and complex. Many of these problems can, however, be formulated as combinatorial optimization models, and heuristic algorithms can help to solve them in the planning phase. In this project, two pure metaheuristic implementations were developed, a Genetic Algorithm (GA) and a Memetic Algorithm (MA), plus a third, hybrid implementation, a Memetic Algorithm with Vocabulary Building (MA+VB), for a problem in telecommunications known in the literature as the SONET Ring Assignment Problem (SRAP). The SRAP arises during the planning stage of the physical network and consists in selecting connections between a number of locations (customers) so as to meet a series of restrictions at the lowest possible cost. This problem is NP-hard, so efficient (polynomial-time) exact algorithms are not known and may, indeed, not even exist.
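
The abstract does not give the encoding or operators, but the genetic-algorithm component can be sketched as follows, assuming each individual assigns every customer to a ring; the cost function, operators and parameters are illustrative placeholders.

    import random

    # GA skeleton for an assignment-type problem (illustrative).
    def genetic_algorithm(n_customers, n_rings, cost,
                          pop_size=50, generations=200, seed=0):
        rng = random.Random(seed)
        new = lambda: [rng.randrange(n_rings) for _ in range(n_customers)]
        pop = [new() for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=cost)
            survivors = pop[: pop_size // 2]        # elitist selection
            children = []
            while len(survivors) + len(children) < pop_size:
                p1, p2 = rng.sample(survivors, 2)
                cut = rng.randrange(1, n_customers)  # one-point crossover
                child = p1[:cut] + p2[cut:]
                if rng.random() < 0.1:               # mutation
                    child[rng.randrange(n_customers)] = rng.randrange(n_rings)
                children.append(child)
            pop = survivors + children
        return min(pop, key=cost)

    # Toy usage: balance six customers across two rings.
    print(genetic_algorithm(6, 2, cost=lambda i: abs(i.count(0) - i.count(1))))

A memetic variant would additionally apply a local-search step to each child, and vocabulary building would recombine solution fragments collected from elite individuals.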