40 results for Greedy algorithms
Abstract:
It is known that the Minimum Weight Triangulation (MWT) problem is NP-hard. The complexity of the Minimum Weight Pseudo-Triangulation (MWPT) problem is unknown, although it is also suspected to be NP-hard. We therefore focused on the development of approximate algorithms to find high-quality triangulations and pseudo-triangulations of minimum weight. In this work we propose two metaheuristics to solve these problems: Ant Colony Optimization (ACO) and Simulated Annealing (SA). For the experimental study we created a set of instances for the MWT and MWPT problems, since no benchmarks for these problems were found in the literature. Through experimental evaluation, we assess the applicability of the ACO and SA metaheuristics to the MWT and MWPT problems. These results are compared with those obtained from the application of deterministic algorithms for the same problems (the Delaunay Triangulation for MWT, and Greedy algorithms for MWT and MWPT).
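As a rough illustration of the greedy baseline mentioned above (not the authors' implementation), the following Python sketch builds a greedy triangulation of a planar point set: candidate edges are sorted by length and each one is accepted if it does not properly cross an already accepted edge. Points in general position are assumed, and the weight is simply the total edge length.

    import itertools, math, random

    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])

    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    def proper_crossing(p1, p2, q1, q2):
        # True if segments p1p2 and q1q2 cross at a point interior to both;
        # shared endpoints are allowed, since triangulation edges meet at vertices.
        if len({p1, p2, q1, q2}) < 4:
            return False
        d1, d2 = cross(p1, p2, q1), cross(p1, p2, q2)
        d3, d4 = cross(q1, q2, p1), cross(q1, q2, p2)
        return (d1 * d2 < 0) and (d3 * d4 < 0)

    def greedy_triangulation(points):
        # Sort all candidate edges by length and accept each one that does not
        # properly cross an already accepted edge (points in general position).
        candidates = sorted(itertools.combinations(points, 2), key=lambda e: dist(*e))
        accepted = []
        for p, q in candidates:
            if all(not proper_crossing(p, q, a, b) for a, b in accepted):
                accepted.append((p, q))
        return accepted

    if __name__ == "__main__":
        random.seed(1)
        pts = [(random.random(), random.random()) for _ in range(12)]
        edges = greedy_triangulation(pts)
        print("edges:", len(edges), "total weight:", sum(dist(p, q) for p, q in edges))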
Abstract:
When nonlinear physical systems of infinite extent, such as tunnels and perforations, are modelled, it is necessary to simulate suitably both the behaviour at infinity and the nonlinearity. The finite element method (FEM) is a well-known procedure for simulating nonlinear behavior. However, the treatment of the infinite domain by truncation is often questionable. On the other hand, the boundary element method (BEM) is suitable for simulating the behaviour at infinity without truncation. By combining both methods, the advantages of each one can therefore be exploited. Several possibilities of FEM-BEM coupling and their performance in some practical cases are discussed in this paper. Parallelizable coupling algorithms based on domain decomposition are developed and compared with the most traditional coupling methods.
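The flavour of such domain-decomposition coupling iterations can be conveyed with a toy example. The sketch below is only a generic illustration under strong simplifications (a 1D Poisson problem, finite differences instead of FEM/BEM, and a sequential alternating Schwarz iteration whose parallel variant would simply reuse the old interface traces in both solves); it is not the coupling scheme studied in the paper.

    import numpy as np

    def solve_poisson_dirichlet(x, ua, ub, f):
        """Solve -u'' = f on the grid x with u(x[0]) = ua, u(x[-1]) = ub (1D FD)."""
        n, h = len(x), x[1] - x[0]
        A = np.zeros((n - 2, n - 2))
        np.fill_diagonal(A, 2.0)
        np.fill_diagonal(A[1:], -1.0)      # subdiagonal
        np.fill_diagonal(A[:, 1:], -1.0)   # superdiagonal
        b = f(x[1:-1]) * h ** 2
        b[0] += ua
        b[-1] += ub
        u = np.empty(n)
        u[0], u[-1] = ua, ub
        u[1:-1] = np.linalg.solve(A, b)
        return u

    # Overlapping subdomains of [0, 1]: an "interior" part [0, 0.6] and an "exterior" part [0.4, 1].
    x1 = np.linspace(0.0, 0.6, 61)
    x2 = np.linspace(0.4, 1.0, 61)
    f = lambda x: np.ones_like(x)          # constant source term
    g1 = 0.0                               # initial guess of the trace at x = 0.6

    for it in range(30):
        u1 = solve_poisson_dirichlet(x1, 0.0, g1, f)   # left solve with interface value g1
        g2 = np.interp(0.4, x1, u1)                    # pass the trace at x = 0.4 to the right solve
        u2 = solve_poisson_dirichlet(x2, g2, 0.0, f)   # right solve with boundary u(1) = 0
        g1_new = np.interp(0.6, x2, u2)                # updated trace at x = 0.6
        if abs(g1_new - g1) < 1e-10:
            break
        g1 = g1_new

    exact = lambda x: 0.5 * x * (1.0 - x)              # exact solution of -u'' = 1, u(0) = u(1) = 0
    print("iterations:", it, "max errors:", np.max(np.abs(u1 - exact(x1))), np.max(np.abs(u2 - exact(x2))))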
Abstract:
The main objective of ventilation systems in case of fire is to reduce the possible consequences by achieving the best possible conditions for the evacuation of the users and the intervention of the emergency services. The required immediate transition of the ventilation equipment from normal to emergency operation is being reinforced by the use of automatic and semi-automatic control systems, which reduce response times by assisting the operators and by applying pre-defined strategies. A further step consists in the use of closed-loop algorithms, which take into account not only the initial conditions but also their evolution (air velocity, traffic situation, etc.), optimizing the smoke control capacity.
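As a purely illustrative sketch of what such a closed-loop strategy involves (none of the numbers or models come from the abstract), the fragment below uses a discrete PI loop to drive the longitudinal air velocity of a one-state toy tunnel model towards a target value, for instance a critical velocity, by adjusting a normalised jet-fan thrust demand.

    def simulate(v_target=3.0, kp=0.6, ki=0.15, dt=1.0, steps=300):
        v = 0.0                 # current longitudinal air velocity [m/s]
        integral = 0.0
        for _ in range(steps):
            error = v_target - v
            integral += error * dt
            u = max(0.0, min(1.0, kp * error + ki * integral))   # thrust demand in [0, 1]
            # toy first-order response: fans accelerate the flow, friction slows it down
            v += dt * (1.5 * u - 0.3 * v)
        return v

    if __name__ == "__main__":
        print("steady-state velocity:", round(simulate(), 2), "m/s")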
Abstract:
This paper describes the design and evaluation of a new platform created to improve the learning experience of bilateral control algorithms in teleoperation. This experimental platform, developed at Universidad Politécnica de Madrid, is used by the students of the Master in Automation and Robotics in the practical sessions of the subject called “Telerobotics and Teleoperation”. The main objective is to easily implement different control architectures on the developed platform and evaluate them under different conditions in order to better understand the main advantages and drawbacks of each control scheme. The students’ tasks are therefore focused on adjusting the control parameters of the predefined controllers and on designing new ones to analyze the changes in the behavior of the whole system. A description of the subject, its main topics and the platform constructed is given in the paper. Furthermore, the methodology followed in the practical sessions and the bilateral control algorithms are presented. Finally, the results obtained in the experiments with students are also shown.
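To give a flavour of one classical architecture typically covered in such courses, the sketch below simulates a position-position (position-error based) bilateral scheme with two 1-DOF master/slave masses coupled by a PD law; the operator pushes the master for two seconds and the slave meets a stiff environment. All models, gains and forces are illustrative assumptions, not the platform described in the paper.

    dt, T = 1e-3, 5.0
    m_m = m_s = 1.0           # master / slave inertias [kg]
    b_m = b_s = 2.0           # viscous damping [N*s/m]
    kp, kd = 50.0, 10.0       # PD coupling gains of the position-position scheme
    k_env, wall = 500.0, 0.5  # stiff environment located at x = 0.5 m

    x_m = v_m = x_s = v_s = 0.0
    log = []
    for k in range(int(T / dt)):
        t = k * dt
        f_h = 5.0 if t < 2.0 else 0.0                        # operator pushes, then releases
        f_e = -k_env * (x_s - wall) if x_s > wall else 0.0   # contact force on the slave
        err, derr = x_m - x_s, v_m - v_s
        f_cm = -kp * err - kd * derr                         # reflected to the master
        f_cs = kp * err + kd * derr                          # drives the slave
        v_m += dt * (f_h + f_cm - b_m * v_m) / m_m           # semi-implicit Euler integration
        x_m += dt * v_m
        v_s += dt * (f_cs + f_e - b_s * v_s) / m_s
        x_s += dt * v_s
        log.append((t, x_m, x_s, f_e))

    print("final positions  master:", round(x_m, 3), " slave:", round(x_s, 3))
    print("peak contact force [N]:", round(max(abs(f) for *_, f in log), 1))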
Abstract:
The analysis of large amounts of data is a field with many years of research behind it, focused on extracting significant values that make the data easier to understand and interpret. The analysis of interdependence between time series is an important area within this field, mainly as a result of advances in the characterization of dynamical systems from the signals they produce. In medicine, many studies try to understand the behaviour of the brain, its mode of operation and its internal connections. The human brain comprises approximately 10^11 neurons, each of which makes about 10^3 synaptic connections. This huge number of connections between individual processing elements provides the fundamental substrate for neuronal ensembles to become transiently synchronized or functionally connected. A similar complex network configuration and dynamics can also be found at the macroscopic scales of systems neuroscience and brain imaging. The emergence of dynamically coupled cell assemblies represents the neurophysiological substrate for cognitive functions such as perception, learning and thinking. Understanding the complex network organization of the brain on the basis of neuroimaging data represents one of the most difficult challenges for systems neuroscience. Brain connectivity is an elusive concept that refers to different interrelated aspects of brain organization: structural connectivity, functional connectivity (FC) and effective connectivity (EC). Structural connectivity refers to a network of physical connections linking sets of neurons; it is the anatomical structure of brain networks. FC, in contrast, refers to the statistical dependence between the signals stemming from two distinct units within a nervous system, while EC refers to the causal interactions between them. This research opens the door to addressing brain-related diseases such as Parkinson’s disease, senile dementia and mild cognitive impairment. One of the most important initiatives associated with research on Alzheimer’s and other diseases is the European project called Blue Brain, in which the Centre for Biomedical Technology (CTB) of Universidad Politécnica de Madrid (UPM) takes part. The CTB researchers have developed a magnetoencephalography (MEG) data processing tool that allows data to be visualised and analysed in an intuitive way. This tool is called HERMES, and it is presented in this document.
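As a minimal, purely illustrative sketch of the functional connectivity idea described above (not of HERMES itself, which implements many interdependence measures), the following fragment builds an FC matrix as the pairwise Pearson correlation between synthetic signals that share a common rhythm to different degrees.

    import numpy as np

    rng = np.random.default_rng(0)
    n_channels, n_samples = 5, 2000
    t = np.arange(n_samples) / 500.0                  # 500 Hz sampling, 4 s of data

    common = np.sin(2 * np.pi * 10 * t)               # shared 10 Hz rhythm
    signals = np.vstack([
        c * common + rng.normal(scale=1.0, size=n_samples)
        for c in (1.0, 0.8, 0.6, 0.1, 0.0)            # decreasing coupling to the rhythm
    ])

    fc = np.corrcoef(signals)                         # channels x channels FC matrix
    np.fill_diagonal(fc, 0.0)                         # ignore trivial self-connections
    print(np.round(fc, 2))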
Abstract:
The design of a nuclear power plant has to follow a number of regulations aimed at limiting the risks inherent in this type of installation. The goal is to prevent and to limit the consequences of any possible incident that might threaten the public or the environment. To verify that the safety requirements are met, a safety assessment process is followed. Safety analysis is a key component of a safety assessment, and it incorporates both probabilistic and deterministic approaches. The deterministic approach attempts to ensure that the various situations, and in particular the accidents, considered to be plausible have been taken into account, and that the monitoring systems and the engineered safety and safeguard systems will be capable of ensuring the safety goals. Probabilistic safety analysis, on the other hand, tries to demonstrate that the safety requirements are met for potential accidents both within and beyond the design basis, thus identifying vulnerabilities not necessarily accessible through deterministic safety analysis alone. Probabilistic safety assessment (PSA) methodology is widely used in the nuclear industry and is especially effective for a comprehensive assessment of the measures needed to prevent accidents with small probability but severe consequences. Still, the trend towards risk-informed regulation (RIR) has demanded a more extended use of risk assessment techniques, with a significant need to further extend the scope and quality of PSA. Here is where the theory of stimulated dynamics (TSD) comes in, as it is the mathematical foundation of the integrated safety assessment (ISA) methodology developed by the Modelling and Simulation (MOSI) branch of the CSN (Consejo de Seguridad Nuclear). This methodology attempts to extend classical PSA by including accident dynamic analysis, an assessment of the damage associated with the transients and a computation of the damage frequency. The application of the ISA methodology requires a computational framework called SCAIS (Simulation Code System for Integrated Safety Assessment). SCAIS supports accident dynamic analysis through the simulation of nuclear accident sequences and operating procedures. Furthermore, it includes probabilistic quantification of fault trees and sequences, and the integration and statistical treatment of risk metrics. SCAIS relies on an intensive use of code coupling techniques to join typical thermal-hydraulic analysis, severe accident and probability calculation codes. The integration of accident simulation in the risk assessment process, which requires the use of complex nuclear plant models, is what makes the methodology so powerful, yet at the cost of an enormous increase in complexity. As this complexity is primarily concentrated in the accident simulation codes, the question arises of whether it is possible to reduce the number of required simulations, which is the focus of the present work. This document presents the work done on the investigation of more efficient techniques applied to the risk assessment process within the ISA methodology. Such techniques therefore have the primary goal of decreasing the number of simulations needed for an adequate estimation of the damage probability. As the methodology and tools are relatively recent, not much work has been done along this line of investigation, making it a difficult but necessary task, and because of time limitations the scope of the work had to be reduced. Some assumptions were therefore made in order to work in simplified scenarios best suited for an initial approximation to the problem. The following section explains in detail the process followed to design and test the developed techniques. The next section then introduces the general concepts and formulae of the TSD theory, which are at the core of the risk assessment process. Afterwards, a description of the simulation framework requirements and design is given, followed by an introduction to the developed techniques, with full detail of their mathematical background and procedures. Later, the test case used is described and the results from the application of the techniques are shown. Finally, the conclusions are presented and future lines of work are outlined.
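The statistical problem underlying the last point can be illustrated with a toy example. The sketch below is only an assumption-laden stand-in (a cheap surrogate replaces the expensive accident simulation, and all numbers are invented): it estimates a damage exceedance probability by crude Monte Carlo sampling and reports its standard error, whose slow 1/sqrt(N) decay is precisely why reducing the number of required simulations matters.

    import numpy as np

    rng = np.random.default_rng(42)

    def peak_temperature(delay):
        """Toy stand-in for an expensive accident simulation: the peak clad
        temperature grows with the operator actuation delay [s]."""
        return 800.0 + 6.0 * delay + rng.normal(scale=20.0)

    threshold = 1200.0                                # damage limit [K]
    n = 5000                                          # number of simulated sequences
    delays = rng.exponential(scale=20.0, size=n)      # uncertain actuation delay
    damage = np.array([peak_temperature(d) > threshold for d in delays])

    p_hat = damage.mean()
    std_err = np.sqrt(p_hat * (1.0 - p_hat) / n)
    print(f"estimated damage probability: {p_hat:.4f} +/- {std_err:.4f} (n = {n})")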
Abstract:
Dimensionality Reduction (DR) is attracting more attention these days as a result of the increasing need to handle huge amounts of data effectively. DR methods allow the number of initial features to be reduced considerably until a set of them is found that preserves the original properties of the data. However, their use entails an inherent loss of quality that is likely to affect the understanding of the data in terms of data analysis. This loss of quality could be a determining factor when selecting a DR method, because of the nature of each method. In this paper, we propose a methodology that allows different DR methods to be analyzed and compared as regards the loss of quality they produce. This methodology makes use of the concept of preservation of geometry (quality assessment criteria) to assess the loss of quality. Experiments have been carried out using the most well-known DR algorithms and quality assessment criteria, based on the literature. These experiments have been applied to 12 real-world datasets. Results obtained so far show that it is possible to establish a method to select the most appropriate DR method, in terms of minimum loss of quality. The experiments have also highlighted some interesting relationships between the quality assessment criteria. Finally, the methodology allows the appropriate choice of dimensionality for reducing data to be established, whilst giving rise to a minimum loss of quality.
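A minimal sketch of the kind of comparison described above, using scikit-learn (the dataset, methods and neighbourhood sizes are arbitrary choices, not those of the paper): two classical DR methods are scored with the trustworthiness criterion, one common way of quantifying how well local geometry is preserved.

    from sklearn.datasets import load_digits
    from sklearn.decomposition import PCA
    from sklearn.manifold import Isomap, trustworthiness

    X = load_digits().data
    for name, model in [("PCA", PCA(n_components=2)),
                        ("Isomap", Isomap(n_components=2, n_neighbors=10))]:
        X_low = model.fit_transform(X)
        # 1.0 means neighbourhoods in the embedding are also neighbourhoods in the original space
        score = trustworthiness(X, X_low, n_neighbors=12)
        print(f"{name}: trustworthiness = {score:.3f}")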
Abstract:
Automatic blood glucose classification may help specialists to provide a better interpretation of blood glucose data, downloaded directly from patients' glucose meters, and will contribute to the development of decision support systems for gestational diabetes. This paper presents an automatic blood glucose classifier for gestational diabetes that compares six different feature selection methods for two machine learning algorithms: neural networks and decision trees. Three search algorithms, Greedy, Best First and Genetic, were combined with two different evaluators, CFS and Wrapper, for the feature selection. The study was carried out with 6080 blood glucose measurements from 25 patients. Decision trees with a feature set selected with the Wrapper evaluator and the Best First search algorithm obtained the best accuracy: 95.92%.
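For illustration only, the following scikit-learn sketch reproduces the spirit of one of the compared combinations, a wrapper evaluator with a greedy (forward) search around a decision tree; since the gestational diabetes dataset is not public, a standard benchmark dataset stands in for it.

    from sklearn.datasets import load_breast_cancer
    from sklearn.feature_selection import SequentialFeatureSelector
    from sklearn.model_selection import cross_val_score
    from sklearn.tree import DecisionTreeClassifier

    X, y = load_breast_cancer(return_X_y=True)
    tree = DecisionTreeClassifier(max_depth=4, random_state=0)

    # wrapper-style greedy forward selection: each candidate subset is scored
    # by cross-validating the learning algorithm itself
    selector = SequentialFeatureSelector(tree, n_features_to_select=5,
                                         direction="forward", cv=5)
    selector.fit(X, y)
    X_sel = selector.transform(X)

    acc = cross_val_score(tree, X_sel, y, cv=5).mean()
    print("selected features:", selector.get_support(indices=True))
    print(f"cross-validated accuracy: {acc:.3f}")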
Abstract:
At present, all methods in Evolutionary Computation are bio-inspired by the fundamental principles of neo-Darwinism, as well as by vertical gene transfer. Virus transduction is one of the key mechanisms of horizontal gene propagation in microorganisms (e.g. bacteria). In the present paper, we model and simulate a transduction operator, exploring the possible role and usefulness of transduction in a genetic algorithm. The genetic algorithm including transduction has been named PETRI (an abbreviation of Promoting Evolution Through Reiterated Infection). Our results show how PETRI reaches higher fitness values as the transduction probability comes close to 100%. The conclusion is that transduction improves the performance of a genetic algorithm, assuming a population divided among several sub-populations or “bacterial colonies”.
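A minimal sketch of how a transduction-like operator can be added to an island genetic algorithm is given below; the representation (OneMax bit strings), parameters and operator details are illustrative assumptions rather than the actual PETRI implementation.

    import random

    random.seed(3)
    GENES, COLONIES, SIZE, GENS = 40, 4, 20, 60
    P_TRANSDUCTION = 0.9

    def fitness(ind):                       # OneMax: number of 1-bits
        return sum(ind)

    def mutate(ind, p=0.02):
        return [g ^ 1 if random.random() < p else g for g in ind]

    colonies = [[[random.randint(0, 1) for _ in range(GENES)] for _ in range(SIZE)]
                for _ in range(COLONIES)]

    for gen in range(GENS):
        for c, colony in enumerate(colonies):
            colony.sort(key=fitness, reverse=True)
            parents = colony[:SIZE // 2]
            offspring = []
            while len(offspring) < SIZE:
                a, b = random.sample(parents, 2)
                cut = random.randrange(1, GENES)
                child = mutate(a[:cut] + b[cut:])            # one-point crossover + mutation
                if random.random() < P_TRANSDUCTION:         # horizontal gene transfer
                    donor = random.choice(random.choice(colonies))   # any "bacterial colony"
                    i = random.randrange(GENES)
                    j = random.randrange(i + 1, GENES + 1)
                    child[i:j] = donor[i:j]                  # infect the child with a donor segment
                offspring.append(child)
            colonies[c] = offspring

    best = max((ind for colony in colonies for ind in colony), key=fitness)
    print("best fitness:", fitness(best), "out of", GENES)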
Abstract:
The article addresses the problem of matching several images of the same scene captured with a 3D scanner in order to generate a single three-dimensional model. Genetic algorithms were used for this purpose. ABSTRACT: This work introduces a solution based on genetic algorithms to find the overlapping area between two point cloud captures obtained from a three-dimensional scanner. Considering three translation coordinates and three rotation angles, the genetic algorithm evaluates the matching points in the overlapping area between the two captures given that transformation. Genetic simulated annealing is used to improve the accuracy of the results obtained by the genetic algorithm.
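The fitness evaluation described above can be sketched as follows (a hypothetical reimplementation, with a random search standing in for the genetic algorithm and simulated annealing): a candidate transformation built from three rotation angles and three translations is applied to one cloud, and the fitness is the number of its points that land within a tolerance of the other cloud.

    import numpy as np

    rng = np.random.default_rng(0)

    def rotation(rx, ry, rz):
        cx, sx, cy, sy, cz, sz = np.cos(rx), np.sin(rx), np.cos(ry), np.sin(ry), np.cos(rz), np.sin(rz)
        Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
        Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
        Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
        return Rz @ Ry @ Rx

    def fitness(params, src, dst, tol=0.05):
        R, t = rotation(*params[:3]), np.asarray(params[3:])
        moved = src @ R.T + t
        # count transformed source points whose nearest destination point lies within tol
        d = np.linalg.norm(moved[:, None, :] - dst[None, :, :], axis=2)
        return int(np.sum(d.min(axis=1) < tol))

    # synthetic example: dst is src under a known rotation + translation
    src = rng.uniform(-1, 1, size=(200, 3))
    true = np.array([0.3, -0.2, 0.5, 0.1, 0.4, -0.3])
    dst = src @ rotation(*true[:3]).T + true[3:]

    best = max((rng.uniform(-1, 1, 6) for _ in range(2000)),
               key=lambda p: fitness(p, src, dst))
    print("matches at the true parameters:", fitness(true, src, dst), "/ 200")
    print("best random-search matches:", fitness(best, src, dst), "/ 200")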
Abstract:
In this article, a novel approach to the design of in-building wireless network deployments is proposed. This approach, known as MOQZEA (Multiobjective Quality Zone Based Evolutionary Algorithm), is a hybrid evolutionary algorithm adapted to use a novel fitness function based on the definition of quality zones for the different objective functions considered. The approach is conceived to solve wireless network design problems without previous information on the required number of transmitters, considering simultaneously a high number of objective functions and optimizing multiple configuration parameters of the transmitters.
Abstract:
This paper describes the objectives, content, learning methodology and results of an online course on the History of Algorithms for engineering students at the Polytechnic University of Madrid (UPM). The course is conducted in a virtual environment based on Moodle, with a student-centred educational model that includes detailed planning of the learning activities. Our experience indicates that this subject is highly motivating for students and that the virtual environment facilitates the development of competencies.
Abstract:
Plant diseases represent a major economic and environmental problem in agriculture and forestry. Upon infection, a plant develops symptoms that affect different parts of the plant, causing a significant agronomic impact. As many such diseases spread in time over the whole crop, a system for early disease detection can help to mitigate the losses produced by plant diseases and can further prevent their spread [1]. In recent years, several mathematical search algorithms have been proposed [2,3] that could be used as non-invasive, fast, reliable and cost-effective methods to localize infectious foci in space by detecting changes in the profile of volatile organic compounds. Tracking scents and locating odour sources is a major challenge in robotics, on the one hand because odour plumes consist of non-uniform, intermittent odour patches dispersed by the wind, and on the other hand because of the lack of precise and reliable odour sensors. Notwithstanding, we have developed a simple robotic platform to study the robustness and effectiveness of different search algorithms [4] with respect to specific problems found in their further application in agriculture, namely errors committed in motion and sensing and the existence of spatial constraints due to land topology or the presence of obstacles.
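A minimal sketch of one of the simplest search strategies one might test on such a platform is given below: a robot on a grid repeatedly samples a noisy concentration field and greedily moves to the neighbouring cell with the highest reading. The Gaussian plume, the noise level and the grid are invented for the example and do not come from the cited work.

    import random, math

    random.seed(7)
    SOURCE = (14, 6)

    def concentration(x, y):
        d2 = (x - SOURCE[0]) ** 2 + (y - SOURCE[1]) ** 2
        return math.exp(-d2 / 80.0) + random.gauss(0.0, 0.01)   # noisy sensing

    def greedy_search(start=(0, 0), steps=200):
        x, y = start
        path = [(x, y)]
        for _ in range(steps):
            neighbours = [(x + dx, y + dy) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))]
            x, y = max(neighbours, key=lambda c: concentration(*c))  # greedy chemotactic step
            path.append((x, y))
            if (x, y) == SOURCE:
                break
        return path

    path = greedy_search()
    print("steps taken:", len(path) - 1, "final position:", path[-1])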
Abstract:
The diversity of bibliometric indices today poses the challenge of exploiting the relationships among them. Our research uncovers the best core set of relevant indices for predicting other bibliometric indices. An added difficulty is to select the role of each variable, that is, which bibliometric indices are predictive variables and which are response variables. This results in a novel multioutput regression problem where the role of each variable (predictor or response) is unknown beforehand. We use Gaussian Bayesian networks to solve this problem and discover multivariate relationships among bibliometric indices. These networks are learnt by a genetic algorithm that looks for the optimal models that best predict bibliometric data. Results show that the optimal induced Gaussian Bayesian networks corroborate previous relationships between several indices, but also suggest new, previously unreported interactions. An extended analysis of the best model illustrates that a set of 12 bibliometric indices can be accurately predicted using only a smaller predictive core subset composed of citations, g-index, q2-index, and hr-index. This research is performed using bibliometric data on Spanish full professors associated with the computer science area.
Abstract:
Multi-dimensional classification (MDC) is the supervised learning problem where an instance is associated with multiple classes, rather than with a single class, as in traditional classification problems. Since these classes are often strongly correlated, modeling the dependencies between them allows MDC methods to improve their performance – at the expense of an increased computational cost. In this paper we focus on the classifier chains (CC) approach for modeling dependencies, one of the most popular and highest-performing methods for multi-label classification (MLC), a particular case of MDC which involves only binary classes (i.e., labels). The original CC algorithm makes a greedy approximation, and is fast but tends to propagate errors along the chain. Here we present novel Monte Carlo schemes, both for finding a good chain sequence and performing efficient inference. Our algorithms remain tractable for high-dimensional data sets and obtain the best predictive performance across several real data sets.
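To make the contrast concrete, the sketch below trains a small classifier chain with scikit-learn and compares greedy inference with a naive Monte Carlo inference that samples label vectors along the chain and keeps the most probable one. It is only an illustration of the two ideas under arbitrary choices (synthetic data, logistic regression base classifiers), not the schemes proposed in the paper.

    import numpy as np
    from sklearn.datasets import make_multilabel_classification
    from sklearn.linear_model import LogisticRegression

    X, Y = make_multilabel_classification(n_samples=600, n_features=12, n_classes=4,
                                          random_state=0)
    X_tr, Y_tr, X_te, Y_te = X[:500], Y[:500], X[500:], Y[500:]
    L = Y.shape[1]

    # train the chain: the classifier for label j sees the features plus labels 0..j-1
    chain = []
    for j in range(L):
        Xj = np.hstack([X_tr, Y_tr[:, :j]])
        chain.append(LogisticRegression(max_iter=1000).fit(Xj, Y_tr[:, j]))

    def greedy_predict(x):
        y = []
        for clf in chain:
            xj = np.concatenate([x, y]).reshape(1, -1)
            y.append(int(clf.predict(xj)[0]))        # commit to the locally best label
        return np.array(y)

    def mc_predict(x, n_samples=50, rng=np.random.default_rng(0)):
        best, best_logp = None, -np.inf
        for _ in range(n_samples):
            y, logp = [], 0.0
            for clf in chain:
                xj = np.concatenate([x, y]).reshape(1, -1)
                p1 = np.clip(clf.predict_proba(xj)[0, 1], 1e-12, 1 - 1e-12)
                yj = int(rng.random() < p1)          # sample the label from the chain
                logp += np.log(p1 if yj else 1.0 - p1)
                y.append(yj)
            if logp > best_logp:                     # keep the most probable sampled vector
                best, best_logp = np.array(y), logp
        return best

    greedy_acc = np.mean([np.array_equal(greedy_predict(x), t) for x, t in zip(X_te, Y_te)])
    mc_acc = np.mean([np.array_equal(mc_predict(x), t) for x, t in zip(X_te, Y_te)])
    print(f"exact-match accuracy  greedy: {greedy_acc:.3f}   Monte Carlo: {mc_acc:.3f}")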