915 results for Simulated annealing


Relevance:

60.00%

Publisher:

Abstract:

This paper presents a novel theory for performing multi-agent activity recognition without requiring large training corpora. The reduced need for data means that robust probabilistic recognition can be performed within domains where annotated datasets are traditionally unavailable. Complex human activities are composed of sequences of underlying primitive activities. We do not assume that the exact temporal ordering of primitives is necessary, so a complex activity can be represented as an unordered bag. Our three-tier architecture comprises low-level video tracking, event analysis and high-level inference. High-level inference is performed using a new, cascading extension of the Rao–Blackwellised Particle Filter. Simulated annealing is used to identify pairs of agents involved in multi-agent activity. We validate our framework using the benchmark PETS 2006 video surveillance dataset and our own sequences, achieving a mean recognition F-score of 0.82, a mean improvement of 17% over a Hidden Markov Model baseline.
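The abstract does not detail how simulated annealing identifies interacting agent pairs; as a minimal illustrative sketch (the pairwise `score` table, function names and cooling schedule are all assumptions, not the authors' method), annealing over disjoint agent pairings could look like:

```python
import math
import random

def sa_pairing(score, agents, iters=5000, t0=1.0, alpha=0.999, seed=0):
    """Simulated annealing over pairings of agents (illustrative sketch).

    score[frozenset((a, b))] is a hypothetical interaction score for an
    unordered pair; the search looks for disjoint pairs maximising the total.
    """
    rng = random.Random(seed)
    order = agents[:]          # a permutation; adjacent elements are paired
    def total(perm):
        return sum(score[frozenset(perm[i:i + 2])] for i in range(0, len(perm), 2))
    best = cur = total(order)
    best_order = order[:]
    t = t0
    for _ in range(iters):
        i, j = rng.sample(range(len(order)), 2)
        order[i], order[j] = order[j], order[i]      # propose: swap two agents
        new = total(order)
        if new >= cur or rng.random() < math.exp((new - cur) / t):
            cur = new
            if cur > best:
                best, best_order = cur, order[:]
        else:
            order[i], order[j] = order[j], order[i]  # reject: undo the swap
        t *= alpha                                   # geometric cooling
    return best, [frozenset(best_order[i:i + 2]) for i in range(0, len(best_order), 2)]
```

Here the pairing that maximises the summed scores stands in for the most plausible assignment of agents to joint activities.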

Relevance:

60.00%

Publisher:

Abstract:

This proposal shows that ACO systems can be applied to requirements selection problems in incremental software development, with the aim of obtaining better results than those produced by expert judgment alone. The ACO systems should be evaluated through a comparative analysis with greedy and simulated annealing algorithms, performing experiments on several problem instances.
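As a hedged sketch of the kind of ACO system the proposal describes (the value/effort model, pheromone update and every parameter below are hypothetical, not taken from the proposal), selecting requirements under an effort budget might be approached like this:

```python
import random

def aco_select(value, effort, budget, n_ants=20, iters=50, rho=0.1, seed=1):
    """Minimal ant colony sketch for requirements selection (illustrative).

    Each requirement i has a client value and an implementation effort;
    ants build subsets respecting the effort budget, and pheromone is
    reinforced on requirements used by the best subset found so far.
    """
    rng = random.Random(seed)
    n = len(value)
    tau = [1.0] * n                      # pheromone per requirement
    best_set, best_val = set(), 0
    for _ in range(iters):
        for _ in range(n_ants):
            chosen, spent, gained = set(), 0, 0
            for i in sorted(range(n), key=lambda _: rng.random()):
                # inclusion probability mixes pheromone and a value/effort heuristic
                p = tau[i] * (value[i] / effort[i])
                if spent + effort[i] <= budget and rng.random() < p / (1 + p):
                    chosen.add(i)
                    spent += effort[i]
                    gained += value[i]
            if gained > best_val:
                best_val, best_set = gained, chosen
        # evaporate everywhere, deposit on the best-so-far subset
        tau = [(1 - rho) * t + (rho if i in best_set else 0.0)
               for i, t in enumerate(tau)]
    return best_val, best_set
```

A greedy value/effort ranking or the annealer from the other abstracts could be run on the same instances for the comparative analysis the proposal calls for.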

Relevance:

60.00%

Publisher:

Abstract:

Over the last few years, more and more heuristic decision-making techniques have been inspired by nature, e.g. evolutionary algorithms, ant colony optimisation and simulated annealing. More recently, a novel computational intelligence technique inspired by immunology has emerged, called Artificial Immune Systems (AIS). This immune-system-inspired technique has already proved useful in solving some computational problems. In this keynote, we will very briefly describe the immune system metaphors that are relevant to AIS. We will then give some illustrative real-world problems suitable for AIS use and show a step-by-step algorithm walkthrough. A comparison of AIS to other well-known algorithms, together with areas for future work, will round this keynote off. It should be noted that as AIS is still a young and evolving field, there is not yet a fixed algorithm template, and hence actual implementations might differ somewhat from the examples given here.
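Since AIS has no fixed algorithm template, the following is only one possible toy walkthrough in the clonal-selection style (the population sizes, rank-based mutation schedule and function names are invented for illustration, not the keynote's algorithm):

```python
import random

def clonalg_min(f, lo, hi, pop=20, gens=60, clones=5, seed=2):
    """Toy clonal-selection sketch (CLONALG-flavoured, illustrative only).

    Antibodies are candidate solutions on [lo, hi]; the fittest half are
    cloned and hypermutated, with larger mutation steps for worse ranks.
    """
    rng = random.Random(seed)
    ab = [rng.uniform(lo, hi) for _ in range(pop)]
    for _ in range(gens):
        parents = sorted(ab, key=f)[:pop // 2]          # select the better half
        offspring = []
        for rank, x in enumerate(parents):
            step = (hi - lo) * (rank + 1) / (10 * pop)  # worse rank -> bigger step
            for _ in range(clones):
                offspring.append(min(hi, max(lo, x + rng.gauss(0, step))))
        ab = sorted(parents + offspring, key=f)[:pop]   # survivor selection
    return min(ab, key=f)
```

The analogy: candidate solutions play the role of antibodies, and affinity (fitness) drives cloning and hypermutation.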

Relevance:

60.00%

Publisher:

Abstract:

The structured representation of cases by attribute graphs in a Case-Based Reasoning (CBR) system for course timetabling has been the subject of previous research by the authors. In that system, the case base is organised as a decision tree and the retrieval process chooses those cases whose attribute graphs are subgraph-isomorphic to the new case. The drawback of that approach is that it is not suitable for solving large problems. This paper presents a multiple-retrieval approach that partitions a large problem into small solvable sub-problems by recursively inputting the unsolved part of the graph into the decision tree for retrieval. The adaptation combines the retrieved partial solutions of all the partitioned sub-problems and employs a graph heuristic method to construct the whole solution for the new case. We present a methodology which is not dependent upon problem-specific information and which, as such, underpins the goal of building more general timetabling systems. We also explore whether this multiple-retrieval CBR could be an effective initialisation method for local search methods such as Hill Climbing, Tabu Search and Simulated Annealing. Significant results are obtained from a wide range of experiments. An evaluation of the CBR system is presented and the impact of the approach on timetabling research is discussed. We find that the approach does indeed represent an effective initialisation method for these local search methods.
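To illustrate the idea of seeding local search with a retrieved solution (the conflict-list timetabling model and every parameter below are assumptions, not the authors' system), a simulated annealing refiner that starts from a CBR-provided assignment rather than a random one might look like:

```python
import math
import random

def sa_timetable(conflicts, slots, init, iters=4000, t0=2.0, alpha=0.999, seed=3):
    """SA refinement of an initial timetable (illustrative sketch).

    `conflicts` lists event pairs that must not share a slot; `init` maps
    each event to a starting slot, e.g. an assignment assembled by CBR
    retrieval. Cost = number of violated conflict pairs.
    """
    rng = random.Random(seed)
    assign = dict(init)
    def cost(a):
        return sum(1 for u, v in conflicts if a[u] == a[v])
    cur = cost(assign)
    t = t0
    for _ in range(iters):
        e = rng.choice(list(assign))        # pick an event to move
        old = assign[e]
        assign[e] = rng.randrange(slots)    # propose a new slot
        new = cost(assign)
        if new <= cur or rng.random() < math.exp((cur - new) / t):
            cur = new
        else:
            assign[e] = old                 # reject: restore the old slot
        t *= alpha
    return assign, cur
```

The same loop works whether `init` comes from the multiple-retrieval CBR or from a random assignment, which is what makes the initialisation comparison possible.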

Relevance:

60.00%

Publisher:

Abstract:

A large number of heuristic algorithms have been developed over the years aimed at solving examination timetabling problems. However, many of these algorithms have been developed specifically to solve one particular problem instance or a small subset of instances related to a given real-life problem. Our aim is to develop a more general system which, when given any exam timetabling problem, will produce results comparable to those of a heuristic specially designed for that problem. We are investigating a Case-Based Reasoning (CBR) technique to select from a set of algorithms that have been applied successfully to similar problem instances in the past. The assumption in CBR is that similar problems have similar solutions. For our system, the assumption is that an algorithm which finds a good solution to one problem will also produce a good result for a similar problem. The key to the success of the system is our definition of similarity between two exam timetabling problems. The study is carried out by running a series of tests using a simple simulated annealing algorithm on a range of problems with differing levels of similarity, and examining the data sets in detail. In this paper we present an initial investigation of the key factors involved in this measure, with a discussion of how the definition of a 'good' solution impacts on it.
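The similarity measure is exactly what the paper leaves open; purely as a hypothetical starting point (the chosen features and the inverse-distance form are assumptions, not the paper's definition), a feature-based similarity between two instances could be sketched as:

```python
import math

def problem_features(n_exams, n_students, conflicts):
    """Hypothetical feature vector for an exam timetabling instance:
    size, enrolment, and conflict density of the exam conflict graph."""
    density = 2 * len(conflicts) / (n_exams * (n_exams - 1))
    return (n_exams, n_students, density)

def similarity(f1, f2):
    """Inverse-distance similarity on relatively normalised features,
    so a 1.0 means the feature vectors are identical."""
    d = math.sqrt(sum(((a - b) / max(a, b, 1)) ** 2 for a, b in zip(f1, f2)))
    return 1 / (1 + d)
```

Under this sketch, algorithm selection would prefer the algorithm that performed best on the case whose features score highest against the new instance.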

Relevance:

60.00%

Publisher:

Abstract:

Previous research has shown that artificial immune systems can be used to produce robust schedules in a manufacturing environment. The main goal is to develop building blocks (antibodies) of partial schedules that can be used to construct backup solutions (antigens) when disturbances occur during production. The building blocks are created based upon underpinning ideas from artificial immune systems and evolved using a genetic algorithm (Phase I). Each partial schedule (antibody) is assigned a fitness value and the best partial schedules are selected to be converted into complete schedules (antigens). We further investigate whether simulated annealing and the great deluge algorithm can improve the results when hybridised with our artificial immune system (Phase II). We use ten fixed solutions as our target and measure how well we cover these specific scenarios.
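The great deluge algorithm mentioned for Phase II can be sketched generically as follows (the linear "rain" schedule, parameter values and the test objective are illustrative assumptions, not the authors' hybrid):

```python
import random

def great_deluge(f, x0, neighbour, iters=3000, rain=None, seed=4):
    """Great deluge sketch (minimisation): accept any move whose cost
    stays below a 'water level' that drops steadily each iteration."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    level = fx                       # start the water level at the initial cost
    if rain is None:
        rain = max(fx / iters, 1e-6) # decay rate chosen to reach ~0 by the end
    best, fbest = x, fx
    for _ in range(iters):
        y = neighbour(x, rng)
        fy = f(y)
        if fy <= level:              # accept while below the water level
            x, fx = y, fy
            if fy < fbest:
                best, fbest = y, fy
        level -= rain                # the level keeps dropping regardless
    return best, fbest
```

Unlike simulated annealing's probabilistic acceptance, the deluge rule is deterministic: anything under the falling level is accepted, which gives a single tunable parameter (the rain speed).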

Relevance:

60.00%

Publisher:

Abstract:

Table olives are consumed and appreciated worldwide and, although their commercial classification is not legally required, the International Olive Council recommends that it be regulated on the basis of sensory evaluation by a trained panel. Implementing such a panel requires compliance with guidelines established by the International Olive Council, resulting in a complex, time-consuming task whose assessments are not free from subjectivity. In this work, for the first time, an electronic tongue was used to classify table olives into commercial categories, defined according to the presence and the median intensity of the predominant organoleptic defect perceived by the panel. Linear discriminant models were established based on subsets of potentiometric signals from the electronic tongue's sensors, selected using the simulated annealing algorithm. The qualitative predictive performance of the classification models was assessed using leave-one-out cross-validation and repeated K-fold cross-validation, the latter minimising the risk of overfitting and yielding more realistic results.
The potential of this qualitative approach, based on the electrochemical profiles generated by the electronic tongue, was satisfactorily demonstrated: (i) in the correct classification (sensitivities ≥ 93%) of standard solutions (n-butyric acid, 2-mercaptoethanol and cyclohexanecarboxylic acid) according to the sensory defect they mimic (butyric, putrid or zapateria); (ii) in the correct classification (sensitivities ≥ 93%) of reference samples of olives and brines (with a single intense defect) according to the type of defect perceived (winey-vinegary, butyric, musty, putrid or zapateria), as selected by the panel; and (iii) in the correct classification (sensitivity ≥ 86%) of highly heterogeneous table olive samples, containing one or more organoleptic defects perceived by the panel in the olives and/or brines, according to their commercial category (defect-free extra, extra, 1st choice, 2nd choice, and olives that cannot be marketed as table olives). Finally, the electronic tongue's ability to quantify the median intensities of the negative attributes detected by the panel was demonstrated using multiple linear regression models coupled with the simulated annealing algorithm, based on selected subsets of the signals recorded during the potentiometric analysis of the olives and brines. The predictive performance of the quantitative models was validated with the same two cross-validation techniques. The models established for each of the five sensory defects present in the samples satisfactorily quantified the median defect intensities (R² ≥ 0.97).
Thus, the satisfactory quality of the qualitative and quantitative results obtained suggests, for the first time, a possible practical application of electronic tongues as a sensory-analysis tool for defects in table olives: a rapid, inexpensive and useful technique for the organoleptic evaluation of negative attributes, complementary to traditional sensory analysis by a trained panel.
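The simulated annealing step here selects sensor-signal subsets to feed the discriminant and regression models; a generic sketch of SA-based subset selection (the swap neighbourhood, the `fitness` interface and all parameters are assumptions, not the thesis's implementation) might read:

```python
import math
import random

def sa_subset(fitness, n_features, k, iters=2000, t0=1.0, alpha=0.998, seed=5):
    """SA search for a size-k feature subset maximising `fitness`
    (an illustrative stand-in for the sensor-signal selection step;
    in practice `fitness` would score a cross-validated model)."""
    rng = random.Random(seed)
    cur = set(rng.sample(range(n_features), k))
    fcur = fitness(cur)
    best, fbest = set(cur), fcur
    t = t0
    for _ in range(iters):
        out_f = rng.choice(sorted(cur))                       # drop one feature
        in_f = rng.choice([i for i in range(n_features) if i not in cur])
        cand = (cur - {out_f}) | {in_f}                       # swap in another
        fc = fitness(cand)
        if fc >= fcur or rng.random() < math.exp((fc - fcur) / t):
            cur, fcur = cand, fc
            if fc > fbest:
                best, fbest = set(cand), fc
        t *= alpha
    return best, fbest
```

In the thesis's setting, `fitness` would be a cross-validated classification or regression score for the models built on that signal subset.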

Relevance:

60.00%

Publisher:

Abstract:

As anthropogenic activities tip many ecosystems into different functional regimes, the resilience of social-ecological systems (SES) becomes a pressing issue. Local actors, involved in a wide variety of groups, ranging from independent local initiatives to large formal institutions, can act on these questions by collaborating to develop, promote or implement practices more in line with what the environment can provide. Complex networks emerge from these repeated collaborations, and it has been shown that the topology of these networks can improve the resilience of the social-ecological systems in which they take part. The topology of actor networks that favours the resilience of their SES is characterised by a combination of several factors: the structure must be modular, to help the different groups develop and propose solutions that are both more innovative (by reducing homogenisation of the network) and closer to their own interests; it must be well connected and easily synchronisable, to facilitate consensus and increase social capital and learning capacity; finally, it must be robust, so that the first two characteristics do not suffer from the voluntary withdrawal or exclusion of certain actors. These characteristics, which are fairly intuitive both conceptually and in their mathematical formulation, are often used separately to analyse the structural qualities of empirical actor networks. However, some of them are inherently incompatible: for example, a network's modularity cannot increase at the same rate as its connectivity, and connectivity cannot be improved while also improving robustness.
This obstacle makes a global measure difficult to build, because the degree to which an actor network contributes to improving the resilience of its SES cannot be a simple sum of the characteristics above, but is rather the result of a subtle trade-off between them. The work presented here aims (1) to explore the trade-offs between these characteristics; (2) to propose a measure of the degree to which an empirical actor network contributes to the resilience of its SES; and (3) to analyse an empirical network in the light of, among other things, these structural qualities. This thesis is organised around an introduction and four chapters numbered 2 to 5. Chapter 2 is a literature review on the resilience of SES; it identifies a series of structural characteristics (along with the corresponding network measures) linked to improved resilience in SES. Chapter 3 is a case study of the Eyre Peninsula, a rural region of South Australia where land use and climate change contribute to the erosion of biodiversity. For this case study, fieldwork was carried out in 2010 and 2011, during which a series of interviews produced a list of the actors involved in biodiversity co-management on the peninsula. The data collected were used to develop an online questionnaire documenting the interactions between these actors. These two steps allowed the reconstruction of a weighted, directed network of 129 individual actors and 1180 relations. Chapter 4 describes a methodology for measuring the degree to which an actor network contributes to the resilience of the SES in which it is embedded.
The method has two steps: first, an optimisation algorithm (simulated annealing) is used to build a semi-random archetype corresponding to a compromise between high levels of modularity, connectivity and robustness. Second, an empirical network (such as that of the Eyre Peninsula) is compared to the archetypal network via a structural distance measure: the shorter the distance, the closer the empirical network is to its optimal configuration. The fifth and final chapter improves the simulated annealing algorithm used in Chapter 4. As is usual for this kind of algorithm, the simulated annealing projected the dimensions of the multi-objective problem onto a single dimension (as a weighted average). While this technique gives very good individual results, it produces only one solution among the multitude of possible trade-offs between the objectives. To better explore these trade-offs, we propose a multi-objective simulated annealing algorithm that, rather than optimising a single solution, optimises a multidimensional surface of solutions. This study, which focuses on the social side of social-ecological systems, improves our understanding of the actor structures that contribute to SES resilience. It shows that while some resilience-enhancing characteristics are incompatible (modularity and connectivity, or, to a lesser extent, connectivity and robustness), others are more easily reconciled (connectivity and synchronisability, or, to a lesser extent, modularity and robustness). It also provides an intuitive method for quantitatively measuring empirical actor networks, opening the way, for example, to case-study comparisons or to the monitoring of actor networks over time.
In addition, this thesis includes a case study that sheds light on the importance of certain institutional groups for coordinating collaboration and knowledge exchange between actors with potentially divergent interests.
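The move from a weighted-sum annealer to a multi-objective one can be sketched with a Pareto archive (the acceptance rule and all parameters below are illustrative assumptions, not the thesis's algorithm):

```python
import math
import random

def dominates(a, b):
    """Pareto dominance for maximisation of all objectives."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def mosa(objectives, neighbour, x0, iters=2000, t0=1.0, alpha=0.998, seed=6):
    """Multi-objective SA sketch: instead of optimising one weighted sum,
    keep an archive of mutually non-dominated solutions (illustrative)."""
    rng = random.Random(seed)
    x = x0
    fx = objectives(x)
    archive = [(x, fx)]
    t = t0
    for _ in range(iters):
        y = neighbour(x, rng)
        fy = objectives(y)
        # accept dominating moves; otherwise accept with a cooling probability
        if dominates(fy, fx) or rng.random() < math.exp(-1 / t):
            x, fx = y, fy
            if not any(dominates(g, fy) for _, g in archive):
                # drop archived points the newcomer dominates, then add it
                archive = [(z, g) for z, g in archive if not dominates(fy, g)]
                archive.append((y, fy))
        t *= alpha
    return archive
```

The returned archive approximates the surface of trade-offs (e.g. between modularity, connectivity and robustness) rather than a single compromise point.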

Relevance:

60.00%

Publisher:

Abstract:

For derived flood frequency analysis based on hydrological modelling, long continuous precipitation time series with high temporal resolution are needed. Often, the network of recording rain gauges is sparse, especially regarding the limited length of the available rainfall time series. Stochastic precipitation synthesis is a good alternative either to extend or to regionalise rainfall series, providing adequate input for long-term rainfall-runoff modelling with subsequent estimation of design floods. Here, a new two-step procedure for stochastic synthesis of continuous hourly space-time rainfall is proposed and tested for the extension of short observed precipitation time series. First, a single-site alternating renewal model is presented to simulate independent hourly precipitation time series for several locations. The alternating renewal model describes wet spell durations, dry spell durations and wet spell intensities using univariate frequency distributions, separately for two seasons. The dependence between wet spell intensity and duration is accounted for by 2-copulas. For disaggregation of the wet spells into hourly intensities a predefined profile is used. In the second step, a multi-site resampling procedure is applied to the synthetic point-rainfall event series to reproduce the spatial dependence structure of rainfall. Resampling is carried out successively on all synthetic event series using simulated annealing, with an objective function considering three bivariate spatial rainfall characteristics. In a case study, synthetic precipitation is generated for locations with short observation records in two mesoscale catchments of the Bode river basin in northern Germany. The synthetic rainfall data are then applied for derived flood frequency analysis using the hydrological model HEC-HMS.
The results show good performance in reproducing average and extreme rainfall characteristics as well as observed flood frequencies. The presented model has the potential to be used for ungauged locations through regionalisation of the model parameters.

Relevance:

60.00%

Publisher:

Abstract:

International audience

Relevance:

60.00%

Publisher:

Abstract:

When we tackle real-world problems using computational resources, we must bear in mind that the number of candidate solutions to consider can be so immense that handling them with classical algorithmic techniques can, in most cases, become a problem in itself because of the resources consumed. In this context, the time spent searching for a solution with exhaustive search algorithms such as brute force, backtracking, branch and bound, etc., can become prohibitive in practice. Faced with this problem, we can study other methods, such as metaheuristics, which, although they do not always guarantee the optimality of the solutions produced, have a much lower running time than exhaustive methods. In this work we selected two NP-complete problems from among the best known in the literature and studied both. Specifically, the selected problems were the TSP (Traveling Salesman Problem) and the 0-1 Knapsack problem. We also studied several metaheuristics for solving these problems, selecting four: evolutionary metaheuristics, ant colony metaheuristics, simulated annealing and GRASP (Greedy Randomized Adaptive Search Procedure). Each problem was then solved with both exhaustive search algorithms and metaheuristics. Once the algorithms had been adapted to the specific problems, we carried out an experimental study with performance comparisons.
Finally, all this work was embodied in a software application consisting of two parts: one containing the implementations of the algorithms adapted to solve the problems, offered as web services, and another implementing a web client that can consume these services and present the execution of the algorithms and the results obtained more attractively. This architecture can serve as a basis for future extensions of this study.
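As one concrete pairing of the studied problems and metaheuristics (the 2-opt-style neighbourhood, cooling schedule and parameters are generic textbook choices, not necessarily those of the study), simulated annealing on the TSP can be sketched as:

```python
import math
import random

def sa_tsp(dist, iters=5000, t0=10.0, alpha=0.999, seed=8):
    """Simulated annealing for the TSP with 2-opt-style segment reversal
    (illustrative sketch; `dist` is a symmetric distance matrix)."""
    rng = random.Random(seed)
    n = len(dist)
    tour = list(range(n))
    def length(t):
        return sum(dist[t[i]][t[(i + 1) % n]] for i in range(n))
    cur = length(tour)
    best, bl = tour[:], cur
    t = t0
    for _ in range(iters):
        i, j = sorted(rng.sample(range(n), 2))
        cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]   # reverse a segment
        cl = length(cand)
        if cl <= cur or rng.random() < math.exp((cur - cl) / t):
            tour, cur = cand, cl
            if cur < bl:
                best, bl = tour[:], cur
        t *= alpha
    return best, bl
```

An exhaustive counterpart (brute force over all permutations) is only feasible for very small n, which is the running-time contrast the study examines.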

Relevance:

60.00%

Publisher:

Abstract:

Facility location concerns the placement of facilities, for various objectives, by use of mathematical models and solution procedures. Almost all facility location models in the literature are based on minimizing costs or maximizing cover, i.e. covering as much demand as possible. These models are quite efficient at finding an optimal location for a new facility for a particular data set, which is considered to be constant and known in advance. In a real-world situation, input data like demand and travel costs are neither fixed nor known in advance. This uncertainty and uncontrollability can lead to unacceptable losses or even bankruptcy. A way of dealing with these factors is robustness modelling. A robust facility location model aims to locate a facility that stays within predefined limits, as far as possible, under all foreseeable circumstances. The deviation robustness concept is used as a basis to develop a new competitive deviation robustness model. Competition is modelled with a Huff-based model, which calculates the market share of the new facility. Robustness in this model is defined as the ability of a facility location to capture a minimum market share despite variations in demand. A test case is developed by which algorithms can be tested on their ability to solve robust facility location models. Four stochastic optimization algorithms are considered, of which Simulated Annealing turned out to be the most appropriate. The test case is slightly modified for a competitive market situation. With the Simulated Annealing algorithm, the developed competitive deviation model is solved for three norms of deviation. Finally, a grid search is performed to illustrate the landscape of the objective function of the competitive deviation model. The model appears to be multimodal and challenging for further research.
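The Huff-based market share calculation at the heart of the model can be sketched as follows (the distance-decay form with exponent `beta` and a scalar attractiveness are standard Huff-model assumptions; the code is illustrative, not the thesis's implementation):

```python
def huff_share(new_fac, facilities, demand_points, attract=1.0, beta=2.0):
    """Huff-style market share sketch: a customer at p patronises a
    facility with probability proportional to attractiveness / distance**beta.

    `facilities` holds competitor (location, attractiveness) pairs;
    `demand_points` holds (location, demand weight) pairs.
    """
    def d(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5 or 1e-9
    share = 0.0
    for p, w in demand_points:
        u_new = attract / d(p, new_fac) ** beta
        u_all = u_new + sum(a / d(p, f) ** beta for f, a in facilities)
        share += w * u_new / u_all           # demand captured at this point
    return share / sum(w for _, w in demand_points)
```

A robust-location search would then anneal over candidate `new_fac` positions, scoring each by its worst-case (deviation) market share across demand scenarios.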