882 results for Multiobjective genetic algorithm
Abstract:
The selective collection of municipal solid waste for recycling is a very complex and expensive process, in which a major issue is designing cost-efficient waste collection routes. Despite the abundance of commercially available fleet-management software, such tools often lack the capability to deal properly with sequencing problems and with dynamic revision of plans and schedules during process execution. Our approach to achieving better solutions for the waste collection process is to model it as a vehicle routing problem, more specifically as a team orienteering problem in which capacity constraints on the vehicles are considered, as well as time windows for the waste collection points and for the vehicles. The resulting model is called the capacitated team orienteering problem with double time windows (CTOPdTW). We developed a genetic algorithm to solve routing problems in waste collection modelled as a CTOPdTW. The results achieved suggest possible reductions of logistics costs in selective waste collection.
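To make the modelling concrete, the following is a minimal, hypothetical sketch of a GA for a capacitated orienteering-style routing problem: chromosomes are permutations of collection points, routes are decoded greedily under a capacity constraint, and fitness is the profit collected. Time windows are omitted for brevity, and all data and parameters are illustrative placeholders rather than the authors' implementation.

```python
# Minimal GA sketch for a capacitated orienteering-style routing problem.
# All data, weights and operators are hypothetical placeholders.
import random

random.seed(42)

POINTS = list(range(10))                               # waste collection points
PROFIT = {p: random.randint(5, 20) for p in POINTS}    # reward for visiting p
LOAD   = {p: random.randint(1, 5)  for p in POINTS}    # waste picked up at p
CAPACITY = 12                                          # per-vehicle capacity
VEHICLES = 2

def decode(chrom):
    """Split a permutation into routes: a vehicle takes points until full."""
    routes, route, load = [], [], 0
    for p in chrom:
        if load + LOAD[p] > CAPACITY:
            routes.append(route)
            route, load = [], 0
            if len(routes) == VEHICLES:   # remaining points are left unvisited
                return routes
        route.append(p)
        load += LOAD[p]
    routes.append(route)
    return routes[:VEHICLES]

def fitness(chrom):
    """Total profit collected by the feasible routes (higher is better)."""
    return sum(PROFIT[p] for r in decode(chrom) for p in r)

def crossover(a, b):
    """Order crossover (OX): keep a slice of parent a, fill the rest from b."""
    i, j = sorted(random.sample(range(len(a)), 2))
    middle = a[i:j]
    rest = [p for p in b if p not in middle]
    return rest[:i] + middle + rest[i:]

def mutate(chrom):
    i, j = random.sample(range(len(chrom)), 2)
    chrom[i], chrom[j] = chrom[j], chrom[i]

pop = [random.sample(POINTS, len(POINTS)) for _ in range(30)]
for _ in range(100):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:10]                       # truncation selection
    children = [crossover(random.choice(elite), random.choice(elite))
                for _ in range(len(pop) - len(elite))]
    for c in children:
        if random.random() < 0.2:
            mutate(c)
    pop = elite + children

best = max(pop, key=fitness)
print("best routes:", decode(best), "profit:", fitness(best))
```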
Abstract:
Earthworks involve the levelling or shaping of a target area through the moving or processing of the ground surface. Most construction projects require earthworks, which are heavily dependent on mechanical equipment (e.g., excavators, trucks and compactors). Earthworks are often the most costly and time-consuming component of infrastructure construction (e.g., roads, railways and airports), and current pressure for higher productivity and safety highlights the need to optimize them, which is a nontrivial task. Most previous attempts at tackling this problem focus on single-objective optimization of partial processes or aspects of earthworks, overlooking the advantages of multi-objective, global optimization. This work describes a novel optimization system based on an evolutionary multi-objective approach, capable of globally optimizing several objectives simultaneously and dynamically. The proposed system views an earthwork construction as a production line, where the goal is to optimize resources under two crucial criteria (cost and duration), focusing the evolutionary search (non-dominated sorting genetic algorithm II, NSGA-II) on compaction allocation while using linear programming to distribute the remaining equipment (e.g., excavators). Several experiments were conducted using real-world data from a Portuguese construction site, showing that the proposed system is quite competitive when compared with current manual earthwork equipment allocation.
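The evolutionary core cited in the abstract, NSGA-II, is built around non-dominated sorting of candidate solutions into Pareto fronts. The sketch below shows that step for the two minimization objectives named (cost, duration); the candidate solutions are hypothetical placeholders, not data from the paper.

```python
# Non-dominated sorting, the core step of NSGA-II, for two minimization
# objectives. Candidate (cost, duration) pairs are hypothetical.

def dominates(a, b):
    """a dominates b if it is no worse on all objectives and better on one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_sort(points):
    """Partition solutions into Pareto fronts (front 0 = non-dominated set)."""
    dominated_by = {i: [] for i in range(len(points))}   # i dominates these
    counts = {i: 0 for i in range(len(points))}          # how many dominate i
    for i, p in enumerate(points):
        for j, q in enumerate(points):
            if dominates(p, q):
                dominated_by[i].append(j)
            elif dominates(q, p):
                counts[i] += 1
    fronts = [[i for i in counts if counts[i] == 0]]
    while fronts[-1]:
        nxt = []
        for i in fronts[-1]:
            for j in dominated_by[i]:
                counts[j] -= 1
                if counts[j] == 0:
                    nxt.append(j)
        fronts.append(nxt)
    return fronts[:-1]            # drop the trailing empty front

# (cost, duration) pairs for candidate equipment allocations
solutions = [(10, 7), (8, 9), (12, 5), (9, 8), (11, 6), (8, 10)]
for rank, front in enumerate(non_dominated_sort(solutions)):
    print("front", rank, [solutions[i] for i in front])
```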
Abstract:
Integrated master's dissertation in Information Systems Engineering and Management
Abstract:
Integrated master's dissertation in Civil Engineering
Abstract:
Decision support models in intensive care units are developed to support medical staff in their decision-making process. However, the optimization of these models is particularly difficult due to the dynamic, complex and multidisciplinary nature of the environment. Thus, there is constant research into and development of new algorithms capable of extracting knowledge from large volumes of data, in order to obtain better predictive results than current algorithms. To test the optimization techniques, a case study with real data provided by the INTCare project was explored. The data concern extubation cases. On this dataset, several models, such as Evolutionary Fuzzy Rule Learning, Lazy Learning, Decision Trees and many others, were analysed in order to detect early extubation. The hybrids Decision Trees Genetic Algorithm, Supervised Classifier System and KNNAdaptive obtained the highest accuracy rates (93.2%, 93.1% and 92.97%, respectively), thus showing their feasibility for working in a real environment.
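As a hedged illustration of the comparison protocol only: the hybrid models named above come from specialized evolutionary-learning toolkits and are not standard library components, so the sketch below scores two scikit-learn stand-ins (a decision tree and k-NN) by cross-validated accuracy on synthetic data.

```python
# Hypothetical sketch of the model-comparison protocol: classifiers scored by
# cross-validated accuracy on a synthetic binary dataset standing in for the
# extubation data. The hybrids named in the abstract are not in scikit-learn.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=500, n_features=10, random_state=0)

models = {
    "decision tree": DecisionTreeClassifier(max_depth=5, random_state=0),
    "k-NN": KNeighborsClassifier(n_neighbors=5),
}
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=10, scoring="accuracy").mean()
    print(f"{name}: {acc:.3f} mean accuracy")
```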
Abstract:
In this paper we explore the effect of bounded rationality on the convergence of individual behavior toward equilibrium. In the context of a Cournot game with a unique and symmetric Nash equilibrium, firms are modeled as adaptive economic agents through a genetic algorithm. Computational experiments show that (1) there is remarkable heterogeneity across identical but boundedly rational agents; (2) such individual heterogeneity is not simply a consequence of the random elements contained in the genetic algorithm; (3) the more rational agents are, in terms of memory abilities and pre-play evaluation of strategies, the less heterogeneous they are in their actions. In the limiting case of full rationality, the outcome converges to the standard result of uniform individual behavior.
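A hypothetical sketch of such an experiment: each firm is a GA whose population holds candidate output quantities, scored by the profit they would earn against the rival's current output. The linear demand and cost parameters are illustrative assumptions, not the paper's; under them, play should hover near the symmetric Nash output (a - c) / (3b).

```python
# Illustrative GA-based adaptive firms in a Cournot duopoly with linear
# demand P = a - b*(q1 + q2) and unit cost c. Parameters are hypothetical.
import random

random.seed(1)
a, b, c = 100.0, 1.0, 10.0
NASH = (a - c) / (3 * b)               # symmetric Nash output = 30.0

def profit(q, q_rival):
    price = max(a - b * (q + q_rival), 0.0)
    return (price - c) * q

def evolve(pop, q_rival):
    """One generation: tournament selection plus Gaussian mutation."""
    def fit(q):
        return profit(q, q_rival)
    new = []
    for _ in range(len(pop)):
        winner = max(random.sample(pop, 2), key=fit)     # tournament of 2
        child = max(winner + random.gauss(0, 1.0), 0.0)  # mutation, q >= 0
        new.append(child)
    return new

pop1 = [random.uniform(0, 50) for _ in range(20)]
pop2 = [random.uniform(0, 50) for _ in range(20)]
q1, q2 = 25.0, 25.0
for _ in range(200):
    pop1 = evolve(pop1, q2)
    pop2 = evolve(pop2, q1)
    q1 = max(pop1, key=lambda q: profit(q, q2))  # each firm plays its best rule
    q2 = max(pop2, key=lambda q: profit(q, q1))

print(f"q1={q1:.1f} q2={q2:.1f} (Nash: {NASH:.1f})")
```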
Abstract:
The success of the Human Genome Project (HGP) in 2000 brought "personalized medicine" closer to reality. The HGP's discoveries have simplified sequencing techniques to such an extent that nowadays anyone can obtain their complete DNA sequence. Read mapping technology stands out among these techniques and is characterized by handling very large amounts of data. Hadoop, the Apache framework for data-intensive applications under the MapReduce paradigm, is a perfect ally for this kind of technology and was the option chosen for this project. Throughout this work, we carry out the study, analysis and experimentation needed to arrive at an innovative Genetic Algorithm that exploits the full potential of Hadoop.
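As a toy illustration of how one GA generation decomposes onto the MapReduce paradigm the thesis builds on: plain Python map/reduce stands in for Hadoop here, and the string-matching objective is purely illustrative (real read mapping would score candidate genome positions against sequenced reads).

```python
# Toy decomposition of a GA generation onto map/reduce: the "map" phase scores
# individuals independently (the part Hadoop would distribute), the "reduce"
# phase aggregates scores. Target string and objective are hypothetical.
import random
from functools import reduce

random.seed(0)
TARGET = "GATTACA"
ALPHABET = "ACGT"

def fitness(ind):
    """Map-side scoring: character matches against the target string."""
    return sum(1 for x, y in zip(ind, TARGET) if x == y)

def rand_ind():
    return "".join(random.choice(ALPHABET) for _ in TARGET)

pop = [rand_ind() for _ in range(40)]
for gen in range(50):
    scored = list(map(lambda ind: (ind, fitness(ind)), pop))      # map phase
    best = reduce(lambda p, q: p if p[1] >= q[1] else q, scored)  # reduce phase
    if best[1] == len(TARGET):
        break
    # next generation: mutated copies of the best individual
    pop = [
        "".join(ch if random.random() > 0.2 else random.choice(ALPHABET)
                for ch in best[0])
        for _ in range(len(pop))
    ]
print("generation", gen, "best:", best[0], "score:", best[1])
```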
Abstract:
Literature from 1928 through 2004 was compiled from different document sources published in Mexico or elsewhere. From these 907 publications, we identified 19 different topics of Chagas disease study in Mexico. The publications were arranged by decade and also by state. This information was used to construct maps describing the distribution of Chagas disease according to different criteria: the disease, vectors, reservoirs, and strains. One of the major problems confronting the study of this zoonotic disease is the great biodiversity of the vector species: there are 30 different species, with at least 10 playing a major role in human infection. The high variability of climates and biogeographic regions further complicates study and understanding of the dynamics of this disease in each region of the country. We used a desktop Genetic Algorithm for Rule-Set Prediction (GARP) procedure to provide ecological models of organism niches, offering improved flexibility in choosing predictive environmental and ecological data. This approach may help to identify regions at risk of disease, plan vector-control programs, and explore parasite-reservoir associations. With this collected information, we have constructed a database, CHAGMEX, available online in HTML format.
Abstract:
Understanding the different background landscapes in which malaria transmission occurs is fundamental to understanding malaria epidemiology and to designing effective local malaria control programs. Geology, geomorphology, vegetation, climate, land use, and anopheline distribution were used as a basis for an ecological classification of the state of Roraima, Brazil, in the northern Amazon Basin, focused on the natural history of malaria and transmission. We applied unsupervised maximum likelihood classification, principal components analysis, and weighted overlay with equal-contribution analyses to fine-scale thematic maps, which resulted in clustered regions. We used ecological niche modeling techniques to develop a fine-scale picture of malaria vector distributions in the state. Eight ecoregions were identified, and malaria-related aspects are discussed based on this classification, including 5 types of dense tropical rain forest and 3 types of savannah. Ecoregions formed by dense tropical rain forest were named montane (ecoregion I), submontane (II), plateau (III), lowland (IV), and alluvial (V). Ecoregions formed by savannah were divided into steppe (VI, campos de Roraima), savannah (VII, cerrado), and wetland (VIII, campinarana). Such ecoregional mappings are important tools in integrated malaria control programs that aim to identify specific characteristics of malaria transmission, classify transmission risk, and define priority areas and appropriate interventions. For some areas, extension of these approaches to still-finer resolutions will provide an improved picture of malaria transmission patterns.
Abstract:
HEMOLIA (a project under the European Community's 7th Framework Programme) is a new-generation Anti-Money Laundering (AML) intelligent multi-agent alert and investigation system which, in addition to traditional financial data, makes extensive use of modern society's huge telecom data source, thereby opening up a new dimension of capabilities to all money laundering fighters (FIUs, LEAs) and financial institutions (banks, insurance companies, etc.). This master's thesis project was carried out at AIA, one of the partners of the HEMOLIA project in Barcelona. The objective of the thesis is to find the clusters in a network drawn from the financial data. An extensive literature survey was carried out, and several standard network algorithms were studied and implemented. Clustering is an NP-hard problem, and algorithms such as K-Means and hierarchical clustering have been applied to problems in sociology, evolution, anthropology, etc. However, these algorithms have certain drawbacks that limit their applicability. The thesis proposes (a) a possible improvement to the K-Means algorithm, (b) a novel approach to the clustering problem using genetic algorithms, and (c) a new algorithm for finding the cluster of a node using a genetic algorithm.
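A hypothetical sketch of the GA-for-clustering idea: each chromosome assigns every point a cluster label, and fitness is the within-cluster sum of squared distances (lower is better). The data and GA settings below are placeholders, not the thesis's algorithm.

```python
# GA clustering sketch: assignment-based encoding, WCSS fitness.
# Data and settings are hypothetical placeholders.
import random

random.seed(3)
K = 2
DATA = [(0, 0), (1, 0), (0, 1), (9, 9), (10, 9), (9, 10)]  # two obvious groups

def wcss(labels):
    """Within-cluster sum of squares for a label assignment."""
    total = 0.0
    for k in range(K):
        members = [p for p, l in zip(DATA, labels) if l == k]
        if not members:
            continue
        cx = sum(p[0] for p in members) / len(members)
        cy = sum(p[1] for p in members) / len(members)
        total += sum((p[0] - cx) ** 2 + (p[1] - cy) ** 2 for p in members)
    return total

def crossover(a, b):
    cut = random.randrange(1, len(a))          # one-point crossover
    return a[:cut] + b[cut:]

def mutate(labels):
    labels[random.randrange(len(labels))] = random.randrange(K)

pop = [[random.randrange(K) for _ in DATA] for _ in range(30)]
for _ in range(100):
    pop.sort(key=wcss)                         # lower WCSS is fitter
    elite = pop[:10]
    pop = elite + [crossover(random.choice(elite), random.choice(elite))
                   for _ in range(20)]
    for ind in pop[10:]:
        if random.random() < 0.3:
            mutate(ind)

best = min(pop, key=wcss)
print("best labels:", best, "WCSS:", wcss(best))
```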
Abstract:
In this paper, the core functions of an artificial intelligence (AI) for controlling a debris collector robot are designed and implemented. Using the Robot Operating System (ROS) as the base of this work, a multi-agent system with task-planning abilities is built.
Abstract:
The potential of type-2 fuzzy sets for managing high levels of uncertainty in the subjective knowledge of experts or in numerical information has attracted attention in control and pattern classification systems in recent years. One of the main challenges in designing a type-2 fuzzy logic system is how to estimate the parameters of the type-2 fuzzy membership functions (T2MFs) and the footprint of uncertainty (FOU) from imperfect and noisy datasets. This paper presents an automatic approach to learning and tuning Gaussian interval type-2 membership functions (IT2MFs), with application to multi-dimensional pattern classification problems. T2MFs and their FOUs are tuned according to the uncertainties in the training dataset by a combination of genetic algorithm (GA) and cross-validation techniques. In our GA-based approach, the chromosome has fewer genes than in other GA methods, and chromosome initialization is more precise. The proposed approach is applied to an interval type-2 fuzzy logic system (IT2FLS) for the problem of nodule classification in a lung computer-aided detection (CAD) system. The designed IT2FLS is compared with its type-1 fuzzy logic system (T1FLS) counterpart. The results demonstrate that the IT2FLS outperforms the T1FLS by more than 30% in terms of classification accuracy.
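For concreteness, the following is a minimal sketch of a Gaussian interval type-2 membership function with an uncertain mean, the kind of T2MF such a GA would tune. The three-gene encoding (m1, m2, sigma) and all values are assumptions for illustration, not the paper's chromosome design.

```python
# Gaussian interval type-2 MF with uncertain mean: the gap between the two
# means m1 and m2 creates the footprint of uncertainty (FOU).
import math

def gauss(x, m, sigma):
    return math.exp(-0.5 * ((x - m) / sigma) ** 2)

def it2_membership(x, m1, m2, sigma):
    """Return the (lower, upper) membership interval at x."""
    if x < m1:
        upper = gauss(x, m1, sigma)
    elif x > m2:
        upper = gauss(x, m2, sigma)
    else:
        upper = 1.0                    # flat top between the two means
    lower = min(gauss(x, m1, sigma), gauss(x, m2, sigma))
    return lower, upper

# one hypothetical gene triple (m1, m2, sigma) per membership function
chromosome = [(0.3, 0.5, 0.15)]
for x in (0.2, 0.4, 0.7):
    lo, hi = it2_membership(x, *chromosome[0])
    print(f"x={x}: membership in [{lo:.3f}, {hi:.3f}]")
```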
Abstract:
A wide range of modelling algorithms is used by ecologists, conservation practitioners, and others to predict species ranges from point locality data. Unfortunately, the amount of data available is limited for many taxa and regions, making it essential to quantify the sensitivity of these algorithms to sample size. This is the first study to address this need by rigorously evaluating a broad suite of algorithms with independent presence-absence data from multiple species and regions. We evaluated predictions from 12 algorithms for 46 species (from six different regions of the world) at three sample sizes (100, 30, and 10 records). We used data from natural history collections to run the models, and evaluated the quality of model predictions with area under the receiver operating characteristic curve (AUC). With decreasing sample size, model accuracy decreased and variability increased across species and between models. Novel modelling methods that incorporate both interactions between predictor variables and complex response shapes (i.e. GBM, MARS-INT, BRUTO) performed better than most methods at large sample sizes but not at the smallest sample sizes. Other algorithms were much less sensitive to sample size, including an algorithm based on maximum entropy (MAXENT) that had among the best predictive power across all sample sizes. Relative to other algorithms, a distance metric algorithm (DOMAIN) and a genetic algorithm (OM-GARP) had intermediate performance at the largest sample size and among the best performance at the lowest sample size. No algorithm predicted consistently well with small sample size (n < 30) and this should encourage highly conservative use of predictions based on small sample size and restrict their use to exploratory modelling.
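A small illustration of the evaluation metric used above, computing AUC from hypothetical presence-absence test records and model suitability scores:

```python
# AUC from independent presence-absence data (1 = presence, 0 = absence)
# and a model's predicted suitability scores. Values are hypothetical.
from sklearn.metrics import roc_auc_score

observed  = [1, 1, 1, 0, 0, 1, 0, 0]                    # test records
predicted = [0.9, 0.8, 0.4, 0.3, 0.2, 0.7, 0.6, 0.1]    # model suitability

print("AUC:", roc_auc_score(observed, predicted))
```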
Abstract:
This master's thesis surveys the literature on how evolutionary algorithms are used to solve different search and optimisation problems in the area of software engineering. Evolutionary algorithms are methods that imitate the natural evolution process. An artificial evolution process evaluates the fitness of each individual, where each individual is a candidate solution. The next population of candidate solutions is formed by exploiting the good properties of the current population through mutation and crossover operations. Applications of evolutionary algorithms related to software engineering were sought in the literature, then classified and presented, together with the necessary basics of evolutionary algorithms. It was concluded that the majority of evolutionary algorithm applications related to software engineering concern software design or testing. For example, there were applications for classifying software production data, project scheduling, static task scheduling for parallel computing, allocating modules to subsystems, N-version programming, test data generation, and generating an integration test order. Many applications were experimental rather than ready for real production use. There were also some Computer-Aided Software Engineering tools based on evolutionary algorithms.
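A compact, generic sketch of the evolutionary loop just described, evaluated on the stock OneMax toy problem (maximize the number of 1-bits); all settings are illustrative:

```python
# Generic evolutionary loop: evaluate fitness, keep good individuals, form the
# next population via crossover and mutation. OneMax is a standard toy problem.
import random

random.seed(7)
N_BITS, POP_SIZE, GENERATIONS = 20, 30, 60

def fitness(ind):
    return sum(ind)                            # number of 1-bits

def crossover(a, b):
    cut = random.randrange(1, N_BITS)          # one-point crossover
    return a[:cut] + b[cut:]

def mutate(ind, rate=0.05):
    return [1 - bit if random.random() < rate else bit for bit in ind]

pop = [[random.randint(0, 1) for _ in range(N_BITS)] for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:POP_SIZE // 2]              # truncation selection
    pop = parents + [
        mutate(crossover(random.choice(parents), random.choice(parents)))
        for _ in range(POP_SIZE - len(parents))
    ]

best = max(pop, key=fitness)
print("best fitness:", fitness(best), "of", N_BITS)
```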