562 results for Heuristics


Relevance:

10.00%

Abstract:

Background: Identification of the structural domains of proteins is important for our understanding of the organizational principles and mechanisms of protein folding, and for insights into protein function and evolution. Algorithmic methods developed so far for dissecting proteins of known structure into domains are based on an examination of multiple geometrical, physical and topological features. Successful as many of these approaches are, they rely heavily on heuristics, and it is not clear whether they illuminate any deep underlying principles of protein domain organization. Other well-performing domain dissection methods rely on comparative sequence analysis. These methods are applicable to sequences with known and unknown structure alike, and their success highlights a fundamental principle of protein modularity, but it does not directly improve our understanding of protein spatial structure.

Relevance:

10.00%

Abstract:

This paper describes the development of a novel metaheuristic that combines an electromagnetic-like mechanism (EM) and the great deluge algorithm (GD) for the university course timetabling problem. This well-known timetabling problem assigns lectures to a specific number of timeslots and rooms, maximizing the overall quality of the timetable while taking various constraints into account. EM is a population-based stochastic global optimization algorithm inspired by physics, simulating the attraction and repulsion of sample points as they move toward optimality. GD is a local search procedure that allows worse solutions to be accepted based on a given upper boundary, or ‘level’. In this paper, the dynamic force calculated from the attraction-repulsion mechanism is used as a decreasing rate to update the ‘level’ within the search process. The proposed method has been applied to a range of benchmark university course timetabling test problems from the literature. Moreover, the viability of the method has been tested by comparing its results with other results reported in the literature, demonstrating that the method is able to improve on currently published solutions. We believe this is due to the combination of both approaches and the ability of the resultant algorithm to keep all solutions converging throughout the search process.
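
As a rough illustration of the hybrid idea (not the authors' exact algorithm), the sketch below shows a great deluge loop in Python whose acceptance 'level' decays by a force value standing in for the EM attraction-repulsion mechanism; `cost`, `neighbour` and `force` are hypothetical user-supplied callbacks.

```python
def em_great_deluge(initial, cost, neighbour, force, iters=10000):
    """Great deluge sketch (minimisation) where the acceptance 'level'
    decreases by a rate derived from an EM-style force, as a stand-in
    for the dynamic force described in the abstract."""
    current = best = initial
    level = cost(initial)
    for _ in range(iters):
        cand = neighbour(current)
        # accept improvements, or worse solutions still under the level
        if cost(cand) <= cost(current) or cost(cand) <= level:
            current = cand
            if cost(current) < cost(best):
                best = current
        level -= force(current)  # EM-derived decay rate for the level
    return best
```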

Relevance:

10.00%

Abstract:

Apparent reversals in rotating trapezia have been regarded as evidence that human vision favours methods which are heuristic or form dependent. However, the argument is based on the assumption that general algorithmic methods would avoid the illusion, and that has never been clear. A general algorithm for interpreting moving parallels has been developed to address the issue. It handles a considerable range of stimuli successfully, but finds multiple interpretations in situations which correspond closely to those where apparent reversals occur. This strengthens the hypothesis that apparent reversals may occur when general algorithmic methods fail and heuristics are invoked as a stopgap.

Relevance:

10.00%

Abstract:

Matching query interfaces is a crucial step in data integration across multiple Web databases. The problem is closely related to schema matching, which typically exploits different features of schemas. Relying on any single feature of schemas is not sufficient. We propose an evidential approach to combining multiple matchers using the Dempster-Shafer theory of evidence. First, our approach views the match results of an individual matcher as a source of evidence that provides a level of confidence in the validity of each candidate attribute correspondence. Second, it combines multiple sources of evidence to get a combined mass function that represents the overall level of confidence, taking into account the match results of different matchers. Our combination mechanism does not require weighting parameters, so no setting and tuning of them is needed. Third, it selects the top k attribute correspondences of each source attribute from the target schema based on the combined mass function. Finally, it uses some heuristics to resolve any conflicts between the attribute correspondences of different source attributes. Our experimental results show that our approach is highly accurate and effective.
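
For readers unfamiliar with Dempster-Shafer combination, here is a minimal sketch of Dempster's rule in Python; the two matchers and their mass assignments are invented for illustration and are not taken from the paper.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions (dicts from frozenset hypotheses to
    mass) with Dempster's rule, normalising away conflicting mass."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb      # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("totally conflicting evidence; rule undefined")
    k = 1.0 - conflict               # normalisation constant
    return {h: w / k for h, w in combined.items()}

# Two hypothetical matchers expressing confidence that a candidate
# attribute correspondence is valid; theta represents ignorance.
M, N = frozenset({"match"}), frozenset({"no_match"})
theta = M | N
m_edit_distance = {M: 0.6, N: 0.1, theta: 0.3}
m_synonyms = {M: 0.5, N: 0.2, theta: 0.3}
print(dempster_combine(m_edit_distance, m_synonyms))
```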

Relevance:

10.00%

Abstract:

The scheduling problem in distributed data-intensive computing environments has become an active research topic due to the tremendous growth in grid and cloud computing. As an innovative distributed intelligent paradigm, swarm intelligence provides a novel approach to solving these potentially intractable problems. In this paper, we formulate the scheduling problem for work-flow applications with security constraints in distributed data-intensive computing environments and present a novel security constraint model. Several meta-heuristic adaptations of the particle swarm optimization algorithm are introduced to produce efficient schedules. A variable neighborhood particle swarm optimization algorithm is compared with a multi-start particle swarm optimization and a multi-start genetic algorithm. Experimental results illustrate that population-based meta-heuristic approaches usually provide a good balance between global exploration and local exploitation, and demonstrate their feasibility and effectiveness for scheduling work-flow applications. © 2010 Elsevier Inc. All rights reserved.
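
The following is a minimal, generic particle swarm optimisation loop (minimisation), not the paper's variable neighborhood variant; for scheduling, the continuous positions would still have to be decoded into task-to-resource assignments, e.g. via random keys.

```python
import random

def pso(fitness, dim, n_particles=30, iters=200,
        w=0.7, c1=1.5, c2=1.5, lo=0.0, hi=1.0):
    """Plain gbest PSO: inertia w, cognitive pull c1, social pull c2."""
    pos = [[random.uniform(lo, hi) for _ in range(dim)]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [fitness(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            f = fitness(pos[i])
            if f < pbest_f[i]:            # update personal best
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:           # and the global best
                    gbest, gbest_f = pos[i][:], f
    return gbest, gbest_f
```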

Relevance:

10.00%

Abstract:

In this paper, we investigate adaptive linear combinations of graph coloring heuristics with a heuristic modifier to address the examination timetabling problem. We invoke a normalisation strategy for each parameter in order to generalise across specific problem data. Two graph coloring heuristics were used in this study: largest degree and saturation degree. A score for the difficulty of assigning each examination was obtained from an adaptive linear combination of these two heuristics, and examinations in the list were ordered by this value. Examinations whose scores indicated higher difficulty were chosen for scheduling first, based on two strategies. We tested single and multiple heuristics, with and without a heuristic modifier, using different combinations of weight values for each parameter on the Toronto and ITC2007 benchmark data sets. We observed that the combination of multiple heuristics with a heuristic modifier offers an effective way to obtain good solution quality. Experimental results demonstrate that our approach delivers promising results. We conclude that this adaptive linear combination of heuristics is a highly effective method that is simple to implement.
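
A minimal sketch of the scoring idea, assuming conflicts are given as adjacency sets; the weights `w_ld` and `w_sd`, the normalisation by current maxima and the function name are illustrative rather than the paper's exact formulation.

```python
def difficulty_scores(exams, conflicts, assigned, w_ld=0.5, w_sd=0.5):
    """Score each unscheduled exam by a weighted, normalised combination
    of largest degree (number of conflicting exams) and saturation
    degree (distinct timeslots already taken by scheduled neighbours)."""
    ld = {e: len(conflicts[e]) for e in exams}
    sd = {e: len({assigned[n] for n in conflicts[e] if n in assigned})
          for e in exams}
    max_ld = max(ld.values()) or 1       # avoid division by zero
    max_sd = max(sd.values()) or 1
    return {e: w_ld * ld[e] / max_ld + w_sd * sd[e] / max_sd
            for e in exams}
```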

Relevance:

10.00%

Abstract:

Policy choices in response to crisis may carry consequences both for distributive outcomes and for the future policy capacity of the state itself. In this paper, we use conceptual heuristics to interpret policy practice. We examine the underlying policy paradigms shaping Irish government decisions in the aftermath of the European financial and economic crisis. We distinguish between two such paradigms, market-conforming and social equity, and apply them to three reform themes: reconfiguration of public budgets, the public service pay bargain, and the organizational profile of state competences. Our findings entail lessons for understanding the malleability of policy choice, and how state policy choices in response to crisis are framed and implemented.

Relevance:

10.00%

Abstract:

In discrete choice experiments respondents are generally assumed to consider all of the attributes across each of the alternatives, and to choose their most preferred. However, results in this paper indicate that many respondents employ simplified lexicographic decision-making rules, whereby they have a ranking of the attributes but their choice of an alternative is based solely on the level of their most important attribute(s). Not accounting for these simple decision-making heuristics introduces systematic errors and leads to biased point estimates, as they violate the continuity axiom and depart from compensatory decision-making. In this paper the implications of lexicographic preferences are examined. In particular, using a mixed logit specification, this paper investigates the sensitivity of individual-specific willingness-to-pay (WTP) estimates to whether lexicographic decision-making rules are accounted for in the modelling of discrete choice responses. Empirical results are obtained from a discrete choice experiment that was carried out to assess the value of a number of rural landscape attributes in Ireland.
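
To make the heuristic concrete, here is a small sketch of a lexicographic choice rule: the respondent compares alternatives on the most important attribute first and consults the next attribute only to break ties. The attribute names and values are invented.

```python
def lexicographic_choice(alternatives, ranking):
    """Choose among alternatives (dicts of attribute levels, higher is
    better) using a strict lexicographic ordering of the attributes."""
    candidates = list(alternatives)
    for attr in ranking:
        best = max(alt[attr] for alt in candidates)
        candidates = [alt for alt in candidates if alt[attr] == best]
        if len(candidates) == 1:
            break
    return candidates[0]

# Hypothetical landscape options rated on two attributes.
options = [{"scenery": 3, "wildlife": 1},
           {"scenery": 3, "wildlife": 2},
           {"scenery": 2, "wildlife": 5}]
print(lexicographic_choice(options, ranking=["scenery", "wildlife"]))
# -> {'scenery': 3, 'wildlife': 2}: scenery decides, wildlife breaks the tie
```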

Relevance:

10.00%

Abstract:

In this study, we investigate an adaptive decomposition and ordering strategy that automatically divides examinations into difficult and easy sets for constructing an examination timetable. The examinations in the difficult set are considered to be hard to place and hence are listed before the ones in the easy set in the construction process. Moreover, the examinations within each set are ordered using different strategies based on graph colouring heuristics. Initially, the examinations are placed into the easy set. During the construction process, examinations that cannot be scheduled are identified as the ones causing infeasibility and are moved forward in the difficult set to ensure earlier assignment in subsequent attempts. On the other hand, the examinations that can be scheduled remain in the easy set.

Within the easy set, a new subset called the boundary set is introduced to accommodate shuffling strategies to change the given ordering of examinations. The proposed approach, which incorporates different ordering and shuffling strategies, is explored on the Carter benchmark problems. The empirical results show that the performance of our algorithm is broadly comparable to existing constructive approaches.
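
A compact sketch of the adaptive decomposition idea under simplifying assumptions (greedy slot assignment, clash constraints only); the data structures and the `max_passes` bound are illustrative.

```python
def adaptive_construction(exams, conflicts, n_slots, max_passes=10):
    """Exams that cannot be scheduled in one construction pass join the
    'difficult' set and are ordered first (hence assigned earlier) on
    the next pass, as in the decomposition strategy described above."""
    difficult = []
    for _ in range(max_passes):
        ordering = difficult + [e for e in exams if e not in difficult]
        timetable, unplaced = {}, []
        for exam in ordering:
            blocked = {timetable[n] for n in conflicts[exam]
                       if n in timetable}
            free = [s for s in range(n_slots) if s not in blocked]
            if free:
                timetable[exam] = free[0]
            else:
                unplaced.append(exam)
        if not unplaced:
            return timetable
        # promote the offending exams for the next attempt
        difficult = unplaced + [e for e in difficult if e not in unplaced]
    return timetable  # best effort after max_passes
```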

Relevance:

10.00%

Abstract:

In this research, an agent-based model (ABM) was developed to generate human movement routes between homes and water resources in a rural setting, given commonly available geospatial datasets on population distribution, land cover and landscape resources. ABMs are an object-oriented computational approach to modelling a system that focuses on the interactions of autonomous agents, aiming to assess the impact of these agents and their interactions on the system as a whole. An A* pathfinding algorithm was implemented to produce walking routes, given data on the terrain in the area. A* is an extension of Dijkstra's algorithm with enhanced time performance through the use of heuristics. In this example, it was possible to impute daily activity movement patterns to the water resource for all villages in a 75 km long study transect across the Luangwa Valley, Zambia, and the simulated human movements were statistically similar to empirical observations on travel times to the water resource (Chi-squared, 95% confidence interval). This indicates that it is possible to produce realistic data on human movements without costly measurement, as is commonly achieved, for example, through GPS or retrospective or real-time diaries. The approach is transferable between different geographical locations, and the product can provide insight into human movement patterns. It therefore has use in many human exposure-related applications, particularly epidemiological research in rural areas, where spatial heterogeneity in the disease landscape and the space-time proximity of individuals can play a crucial role in disease spread.
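
For reference, a self-contained A* over a 2-D terrain-cost grid; with every cell cost at least 1, Manhattan distance is an admissible heuristic. The grid representation is an assumption for illustration, not the study's actual terrain model.

```python
import heapq
from itertools import count

def a_star(grid_cost, start, goal):
    """A* shortest path on a grid where grid_cost[r][c] >= 1 is the
    cost of stepping into cell (r, c); returns the path or None."""
    rows, cols = len(grid_cost), len(grid_cost[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    tie = count()                        # breaks ties between equal f-scores
    frontier = [(h(start), next(tie), 0, start, None)]
    came_from, best_g = {}, {start: 0}
    while frontier:
        _, _, g, cur, parent = heapq.heappop(frontier)
        if cur in came_from:
            continue                     # already expanded more cheaply
        came_from[cur] = parent
        if cur == goal:                  # walk parents back to the start
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        r, c = cur
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols:
                ng = g + grid_cost[nxt[0]][nxt[1]]
                if ng < best_g.get(nxt, float("inf")):
                    best_g[nxt] = ng
                    heapq.heappush(frontier,
                                   (ng + h(nxt), next(tie), ng, nxt, cur))
    return None
```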

Relevance:

10.00%

Abstract:

This paper is concerned with the application of an automated hybrid approach to the university timetabling problem. The approach described is based on the nature-inspired artificial bee colony (ABC) algorithm, a biologically-inspired optimization approach that has been widely applied in recent years to a range of optimization problems such as job shop scheduling and machine timetabling. Although the approach has proven to be robust across a range of problems, it is acknowledged within the literature that there currently exist a number of inefficiencies in its exploration and exploitation abilities, which can often lead to slow convergence within the search process. Hence, this paper introduces a variant of the algorithm which utilizes a global best model inspired by particle swarm optimization to enhance global exploration, while hybridizing with the great deluge (GD) algorithm to improve local exploitation. Using this approach, an effective balance between exploration and exploitation is attained. In addition, a traditional local search approach is incorporated within the GD algorithm with the aim of further enhancing the performance of the overall hybrid method. To evaluate the performance of the proposed approach, two diverse university timetabling datasets are investigated: Carter's examination timetabling and Socha's course timetabling datasets. It should be noted that the two problems have differing complexity and different solution landscapes. Experimental results demonstrate that the proposed method is capable of producing high quality solutions across both benchmark problems, showing a good degree of generality in the approach. Moreover, the proposed method produces the best results on some instances when compared with other approaches presented in the literature.
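
One common way to fold a global best term into the ABC position update is the gbest-guided candidate below, in the style of Zhu and Kwong's GABC; it sketches the general idea rather than the exact update used in this paper.

```python
import random

def gbest_candidate(x_i, x_k, gbest, c=1.5):
    """Perturb one randomly chosen dimension of solution x_i using a
    random neighbour x_k and a pull toward the global best:
    v_j = x_j + phi * (x_j - x_kj) + psi * (gbest_j - x_j),
    with phi ~ U(-1, 1) and psi ~ U(0, c)."""
    j = random.randrange(len(x_i))
    phi = random.uniform(-1.0, 1.0)
    psi = random.uniform(0.0, c)
    v = x_i[:]
    v[j] = x_i[j] + phi * (x_i[j] - x_k[j]) + psi * (gbest[j] - x_i[j])
    return v
```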

Relevance:

10.00%

Abstract:

There has been much interest in the belief–desire–intention (BDI) agent-based model for developing scalable intelligent systems, e.g. using the AgentSpeak framework. However, reasoning from sensor information in these large-scale systems remains a significant challenge. For example, agents may be faced with information from heterogeneous sources which is uncertain and incomplete, while the sources themselves may be unreliable or conflicting. In order to derive meaningful conclusions, it is important that such information be correctly modelled and combined. In this paper, we choose to model uncertain sensor information in Dempster–Shafer (DS) theory. Unfortunately, as in other uncertainty theories, simple combination strategies in DS theory are often too restrictive (losing valuable information) or too permissive (resulting in ignorance). For this reason, we investigate how a context-dependent strategy originally defined for possibility theory can be adapted to DS theory. In particular, we use the notion of largely partially maximal consistent subsets (LPMCSes) to characterise the context for when to use Dempster’s original rule of combination and for when to resort to an alternative. To guide this process, we identify existing measures of similarity and conflict for finding LPMCSes along with quality of information heuristics to ensure that LPMCSes are formed around high-quality information. We then propose an intelligent sensor model for integrating this information into the AgentSpeak framework which is responsible for applying evidence propagation to construct compatible information, for performing context-dependent combination and for deriving beliefs for revising an agent’s belief base. Finally, we present a power grid scenario inspired by a real-world case study to demonstrate our work.
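
A toy sketch of context-dependent dispatch: measure the conflict mass between two sources and apply Dempster's rule only when conflict is low, falling back to a cautious average otherwise. The threshold and the fallback rule are placeholders (the paper selects per LPMCS rather than per pair), and `dempster_combine` is the function from the earlier sketch.

```python
def conflict_mass(m1, m2):
    """Total mass the two sources place on incompatible hypotheses;
    high values mean Dempster's rule would normalise away real conflict."""
    return sum(w1 * w2
               for a, w1 in m1.items()
               for b, w2 in m2.items()
               if not (a & b))

def combine_context_dependent(m1, m2, threshold=0.5):
    """Use Dempster's rule when the sources largely agree, otherwise
    average the mass functions as a conflict-tolerant fallback."""
    if conflict_mass(m1, m2) < threshold:
        return dempster_combine(m1, m2)  # from the earlier sketch
    merged = {}
    for m in (m1, m2):
        for h, w in m.items():
            merged[h] = merged.get(h, 0.0) + 0.5 * w
    return merged
```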

Relevance:

10.00%

Abstract:

A new heuristic based on the Nawaz–Enscore–Ham (NEH) algorithm is proposed in this paper for solving the permutation flowshop scheduling problem. A new priority rule is proposed that accounts for the average, mean absolute deviation, skewness and kurtosis of processing times, in order to fully describe their distribution. A new tie-breaking rule is also introduced to achieve effective job insertion, with the objective of minimizing both makespan and machine idle time. Statistical tests show the better solution quality of the proposed algorithm compared to existing benchmark heuristics.
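
For context, here is the classic NEH procedure the paper builds on, with the standard total-processing-time priority rule; the paper's contribution replaces this rule (using the mean, MAD, skewness and kurtosis of processing times) and adds a tie-breaking rule for insertion.

```python
def neh(proc_times):
    """Classic NEH for permutation flowshop: proc_times[j][k] is the
    processing time of job j on machine k; returns (sequence, makespan)."""
    def makespan(seq):
        m = len(proc_times[0])
        comp = [0.0] * m                 # completion time per machine
        for j in seq:
            comp[0] += proc_times[j][0]
            for k in range(1, m):
                comp[k] = max(comp[k], comp[k - 1]) + proc_times[j][k]
        return comp[-1]

    # priority rule: jobs in decreasing order of total processing time
    jobs = sorted(range(len(proc_times)),
                  key=lambda j: -sum(proc_times[j]))
    seq = [jobs[0]]
    for j in jobs[1:]:                   # insert at the best position
        seq = min((seq[:i] + [j] + seq[i:] for i in range(len(seq) + 1)),
                  key=makespan)
    return seq, makespan(seq)
```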

Relevance:

10.00%

Abstract:

Maritime transport is the main mode of freight transport worldwide, and fuels and petroleum products account for a large share of the goods carried by sea. As Cape Verde is an archipelago, transport by sea plays a highly important role in the country's economy. We consider the fuel distribution problem in Cape Verde, where a company is responsible for coordinating the distribution of petroleum products together with the management of the corresponding inventory levels at each port, so as to satisfy the demand for the various products. The objective is to determine fuel distribution policies that minimize the total distribution cost (transport and operations) while inventory levels are kept at the desired values. For convenience, according to the planning timescale, the problem is divided into two interconnected subproblems: a short-term one and a medium-term one. For the short-term problem, mixed integer programming models are discussed that simultaneously consider a continuous and a discrete measure of time, in order to model multiple time windows and consumption rates that vary daily. The models are strengthened with the inclusion of valid inequalities, and the problem is then solved using commercial software. For the medium-term problem, several mixed integer programming models for a short time horizon are first discussed and compared, now assuming a constant consumption rate, and new valid inequalities are introduced. Based on the chosen model, heuristic strategies that combine three well-known heuristics (Rolling Horizon, Feasibility Pump and Local Branching) are compared in order to generate good feasible solutions for plans with horizons of several months. Finally, in order to deal with unforeseen but important situations in maritime transport, such as bad weather and port congestion, we present a stochastic model for a short-term problem in which travel times and waiting times at ports are random. The problem is formulated as a two-stage model, where the first stage takes the decisions concerning ship routes and the quantities to load and unload, and the second stage (called the subproblem) considers the (recourse) decisions concerning the scheduling of operations. The problem is solved by a decomposition method that uses an efficient algorithm to separate the inequalities violated in the subproblem.
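
A schematic of the Rolling Horizon strategy mentioned above: solve a tractable window exactly, freeze the early decisions, and roll forward. `solve_window` is a hypothetical stand-in for the MIP solve, its returned decisions are assumed to be keyed by day, and the window and overlap parameters are illustrative (assume overlap < window).

```python
def rolling_horizon(total_days, solve_window, window=30, overlap=7):
    """Solve overlapping short-horizon subproblems, committing only the
    non-overlapping prefix of each window before rolling forward."""
    fixed = {}                            # day -> committed decisions
    start = 0
    while start < total_days:
        end = min(start + window, total_days)
        decisions = solve_window(start, end, fixed)  # hypothetical MIP call
        commit_until = (total_days if end == total_days
                        else start + window - overlap)
        for day in range(start, commit_until):
            fixed[day] = decisions[day]
        start = commit_until
    return fixed
```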

Relevance:

10.00%

Abstract:

Network virtualisation is seen as a promising approach to overcome the so-called “Internet impasse” and bring innovation back into the Internet, by allowing easier migration towards novel networking approaches as well as the coexistence of complementary network architectures on a shared infrastructure in a commercial context. Recently, interest from operators and mainstream industry in network virtualisation has grown quite significantly, as the potential benefits of virtualisation became clearer, both from an economic and an operational point of view. In the beginning, the concept was mainly a research topic and was materialised in small-scale testbeds and research network environments. This PhD thesis aims to provide the network operator with a set of mechanisms and algorithms capable of managing and controlling virtual networks. To this end, we propose a framework that allocates, monitors and controls virtual resources in a centralised and efficient manner. To analyse the performance of the framework, we implemented and evaluated it on a small-scale testbed. To enable the operator to make an efficient allocation of virtual networks onto the substrate network, in real time and on demand, a heuristic algorithm is proposed to perform the virtual network mapping. For the network operator to obtain the highest profit from the physical network, a mathematical formulation is also proposed that aims to maximise the number of virtual networks allocated onto the physical network. Since the power consumption of the physical network is very significant in the operating costs, it is important to allocate virtual networks on fewer physical resources, and on physical resources that are already active. To address this challenge, we propose a mathematical formulation that aims to minimise the energy consumption of the physical network without affecting the efficiency of the allocation of virtual networks. To minimise fragmentation of the physical network while increasing the revenue of the operator, the initial formulation is extended to contemplate the re-optimisation of previously mapped virtual networks, so that the operator makes better use of its physical infrastructure. It is also necessary to address the migration of virtual networks, whether for load balancing or because of imminent failure of physical resources, without affecting the proper functioning of the virtual network. To this end, we propose a method based on cloning techniques to migrate virtual networks across the physical infrastructure transparently and without affecting the virtual network. In order to assess the resilience of virtual networks to physical network failures, while obtaining the optimal solution for the migration of virtual networks in case of imminent failure of physical resources, the mathematical formulation is extended to minimise the number of migrated nodes and the relocation of virtual links. In comparison with our optimisation proposals, we found that existing heuristics for mapping virtual networks perform poorly. We also found that it is possible to minimise energy consumption without penalising allocation efficiency. By applying re-optimisation to the virtual networks, it was shown that it is possible to obtain more free resources and to balance the physical resources better. Finally, it was shown that virtual networks are quite resilient to failures in the physical network.
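
A minimal greedy node-mapping sketch for virtual network embedding, assuming one virtual node per substrate node and CPU as the only resource; the thesis's heuristic and formulations are considerably richer (link mapping, energy, re-optimisation).

```python
def greedy_vne_mapping(vnodes, snodes, cpu_demand, cpu_free):
    """Map the most demanding virtual nodes first onto the substrate
    nodes with the most spare CPU; link mapping (e.g. shortest paths
    between mapped endpoints) would follow in a full embedder."""
    mapping, free = {}, dict(cpu_free)
    for v in sorted(vnodes, key=lambda v: -cpu_demand[v]):
        candidates = [s for s in snodes
                      if free[s] >= cpu_demand[v]
                      and s not in mapping.values()]
        if not candidates:
            return None                  # embedding request rejected
        s = max(candidates, key=lambda s: free[s])
        mapping[v] = s
        free[s] -= cpu_demand[v]
    return mapping
```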