48 results for simple algorithms
at Instituto Politécnico do Porto, Portugal
Abstract:
Fuzzy logic controllers (FLC) are intelligent systems, based on heuristic knowledge, that have been widely applied in numerous areas of everyday life. They can be used to describe linear or nonlinear systems and are suitable when a real system is unknown or too difficult to model. FLC provide a formal methodology for representing, manipulating and implementing human heuristic knowledge on how to control a system. These controllers can be seen as artificial decision makers that operate in a closed-loop system, in real time. The main aim of this work was to develop a single optimal fuzzy controller, easily adaptable to a wide range of systems – simple to complex, linear to nonlinear – and able to control all these systems. Due to their efficiency in searching for and finding optimal solutions to highly complex problems, genetic algorithms (GAs) were used to tune the FLC by finding the parameters that yield the best responses. The work was performed using the MATLAB/SIMULINK software, a very useful tool that provides an easy way to test and analyse the FLC, the PID and the GAs in the same environment. A Fuzzy PID controller (FL-PID) type was therefore proposed, namely the Fuzzy PD+I. The controller was compared with the classical PID controller tuned with the heuristic Ziegler-Nichols tuning method, the optimal Zhuang-Atherton tuning method and the GA method itself. The IAE, ISE, ITAE and ITSE criteria, used as the GA fitness functions, were applied to compare the performance of the controllers used in this work. Overall, and for most systems, the results of the FL-PID tuned with GAs were very satisfactory; moreover, in some cases they were substantially better than those of the other PID controllers. The best system responses were obtained with the IAE and ITAE criteria used to tune the FL-PID and PID controllers.
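For illustration only, the following Python sketch (the work itself used MATLAB/SIMULINK) shows how an integral-error criterion such as the IAE can serve as a GA fitness function when tuning controller gains; the first-order plant, the gain ranges and the tiny GA loop are assumptions, not the setup described above.

```python
# Illustrative sketch: IAE as a GA fitness function for PID gain tuning on an
# assumed first-order plant dy/dt = (-y + u) / tau. Not the paper's setup.
import random

def iae_cost(kp, ki, kd, tau=1.0, dt=0.01, t_end=10.0, setpoint=1.0):
    """Simulate a unit-step response with Euler integration and return the IAE."""
    y, integral, prev_error, iae = 0.0, 0.0, setpoint, 0.0
    for _ in range(int(t_end / dt)):
        error = setpoint - y
        integral += error * dt
        derivative = (error - prev_error) / dt
        u = kp * error + ki * integral + kd * derivative   # PID control law
        y += dt * (-y + u) / tau                            # plant update
        iae += abs(error) * dt                              # IAE accumulation
        prev_error = error
        if not (abs(y) < 1e6):
            return float("inf")                             # penalize unstable gains
    return iae

# Very small GA: truncation selection plus Gaussian mutation over (Kp, Ki, Kd).
population = [[random.uniform(0, 10) for _ in range(3)] for _ in range(20)]
for generation in range(30):
    population.sort(key=lambda g: iae_cost(*g))
    parents = population[:5]
    population = parents + [
        [max(0.0, gene + random.gauss(0, 0.5)) for gene in random.choice(parents)]
        for _ in range(15)
    ]
best = min(population, key=lambda g: iae_cost(*g))
print("best (Kp, Ki, Kd):", best, "IAE:", iae_cost(*best))
```

Swapping the accumulated quantity (e.g. time-weighted absolute error for ITAE) changes the criterion without touching the rest of the loop.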
Abstract:
Electricity markets are complex environments, involving numerous entities trying to obtain the best advantages and profits while limited by power-network characteristics and constraints. The restructuring and consequent deregulation of electricity markets introduced a new economic dimension to the power industry. Some observers have criticized the restructuring process, however, because it has failed to improve market efficiency and has complicated the assurance of reliability and fairness of operations. To study and understand this type of market, we developed the Multiagent Simulator of Competitive Electricity Markets (MASCEM) platform, based on multiagent simulation. The MASCEM multiagent model includes players with strategies for bid definition, acting in forward, day-ahead, and balancing markets and considering both simple and complex bids. Our goal with MASCEM was to simulate as many market models and player types as possible. This approach makes MASCEM both a short- and medium-term simulation and a tool to support long-term decisions, such as those taken by regulators. This article proposes a new methodology, integrated in MASCEM, for bid definition in electricity markets. This methodology uses reinforcement learning algorithms to let players perceive changes in the environment, thus helping them react to the dynamic environment and adapt their bids accordingly.
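As a hedged illustration of the bid-definition idea, the sketch below uses a stateless (bandit-style) Q-value update to adapt a discretized bid price from profit feedback; the price grid, clearing-price model and learning rates are hypothetical and do not reflect MASCEM's actual implementation.

```python
# Hypothetical sketch of reinforcement-learning bid adaptation (not MASCEM code):
# a player picks one of a few discrete bid prices and updates a Q-value from the
# profit obtained in each simulated market round.
import random

prices = [30.0, 35.0, 40.0, 45.0, 50.0]      # assumed bid price grid (EUR/MWh)
q = {p: 0.0 for p in prices}                  # one Q-value per candidate bid
alpha, epsilon = 0.1, 0.2                     # learning rate, exploration rate

def market_round(bid):
    """Stand-in for the market environment: returns the profit of a bid."""
    clearing_price = random.gauss(42.0, 3.0)  # assumed stochastic clearing price
    return (clearing_price - 25.0) if bid <= clearing_price else 0.0

for episode in range(1000):
    # epsilon-greedy choice between exploring and exploiting the best bid so far
    bid = random.choice(prices) if random.random() < epsilon else max(q, key=q.get)
    reward = market_round(bid)
    q[bid] += alpha * (reward - q[bid])       # incremental, stateless Q update

print("learned bid preferences:", q)
```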
Abstract:
Swarm Intelligence generally refers to a problem-solving ability that emerges from the interaction of simple information-processing units. The concept of Swarm suggests multiplicity, distribution, stochasticity, randomness, and messiness; the concept of Intelligence suggests that the problem-solving approach is successful, drawing on learning, creativity and cognition capabilities. This paper introduces some of the theoretical foundations, the biological motivation and fundamental aspects of swarm intelligence based optimization techniques, such as Particle Swarm Optimization (PSO), Ant Colony Optimization (ACO) and Artificial Bee Colony (ABC) algorithms, for scheduling optimization.
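A minimal PSO sketch in Python, assuming a simple sphere objective and textbook coefficient values; it illustrates only the velocity/position update, not the scheduling application discussed in the paper.

```python
# Minimal Particle Swarm Optimization (PSO) sketch for generic minimization;
# the sphere objective and the PSO coefficients are illustrative choices.
import random

def objective(x):
    return sum(xi * xi for xi in x)           # sphere function as a stand-in

dim, n_particles, iterations = 2, 20, 100
w, c1, c2 = 0.7, 1.5, 1.5                     # inertia, cognitive, social weights

pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
vel = [[0.0] * dim for _ in range(n_particles)]
pbest = [p[:] for p in pos]                   # personal best positions
gbest = min(pbest, key=objective)             # global best position

for _ in range(iterations):
    for i in range(n_particles):
        for d in range(dim):
            r1, r2 = random.random(), random.random()
            vel[i][d] = (w * vel[i][d]
                         + c1 * r1 * (pbest[i][d] - pos[i][d])
                         + c2 * r2 * (gbest[d] - pos[i][d]))
            pos[i][d] += vel[i][d]
        if objective(pos[i]) < objective(pbest[i]):
            pbest[i] = pos[i][:]
    gbest = min(pbest, key=objective)

print("best solution found:", gbest, "value:", objective(gbest))
```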
Abstract:
Introduction: Image resizing is a standard feature of Nuclear Medicine digital imaging, applied whenever there is a need to increase – or decrease – the total number of pixels; upsampling, in particular, is done by manufacturers to fit the acquired images more adequately on the display screen. This paper aims to compare the “hqnx” and “nxSaI” magnification algorithms with two interpolation algorithms – “nearest neighbor” and “bicubic interpolation” – in image upsampling operations. Material and Methods: Three distinct Nuclear Medicine images were enlarged 2 and 4 times with the different digital image resizing algorithms (nearest neighbor, bicubic interpolation, nxSaI and hqnx). To evaluate the pixel changes between the different output images, 3D whole-image plot profiles and surface plots were used in addition to the visual assessment of the 4x upsampled images. Results: In the 2x enlarged images the visual differences were not particularly noteworthy, although bicubic interpolation clearly presented the best results. In the 4x enlarged images the differences were significant, with the bicubic interpolated images again presenting the best results. Hqnx resized images presented better quality than 4xSaI and nearest neighbor interpolated images; however, their intense “halo effect” greatly affects the definition and boundaries of the image contents. Conclusion: The hqnx and nxSaI algorithms were designed for images with clear edges, so their use in Nuclear Medicine images is clearly inadequate. Of the algorithms studied, bicubic interpolation seems the most suitable, and its increasingly widespread application appears to confirm this, establishing it as an efficient algorithm for multiple image types.
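A minimal sketch of the nearest-neighbour versus bicubic comparison using the Pillow library; the input file name is a placeholder, and the hqnx/nxSaI algorithms are omitted because they are not available in standard imaging libraries.

```python
# Sketch of the 2x/4x upsampling comparison with Pillow; "nm_study.png" is a
# placeholder for a Nuclear Medicine image. hq*x and *xSaI are not provided by
# Pillow and are therefore not reproduced here.
from PIL import Image

original = Image.open("nm_study.png")
for factor in (2, 4):
    size = (original.width * factor, original.height * factor)
    nearest = original.resize(size, Image.NEAREST)   # block-replication upsampling
    bicubic = original.resize(size, Image.BICUBIC)   # smooth cubic interpolation
    nearest.save(f"nearest_{factor}x.png")
    bicubic.save(f"bicubic_{factor}x.png")
```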
Abstract:
Introduction: A major focus of the data mining process – especially of machine learning research – is to automatically learn to recognize complex patterns and help take adequate decisions based strictly on the acquired data. Since imaging techniques like MPI – Myocardial Perfusion Imaging in Nuclear Cardiology – can account for a large part of the daily workflow and generate gigabytes of data, computerized analysis of the data could have advantages over human analysis: shorter time, homogeneity and consistency, automatic recording of analysis results, relatively low cost, etc. Objectives: The aim of this study is to evaluate the efficacy of this methodology in the assessment of MPI stress studies and in the decision of whether or not to continue the evaluation of each patient. The objective was to automatically classify each patient test into one of three groups: “Positive”, “Negative” and “Indeterminate”. “Positive” tests would proceed directly to the rest part of the exam, “Negative” tests would be directly exempted from continuation, and only the “Indeterminate” group would require the clinician's analysis, thus saving clinician effort, increasing workflow fluidity at the technologist's level and probably sparing time for patients. Methods: The WEKA v3.6.2 open-source software was used to make a comparative analysis of three WEKA algorithms (“OneR”, “J48” and “Naïve Bayes”) in a retrospective study, using the corresponding clinical results, signed by expert nuclear cardiologists, as the reference, on the “SPECT Heart Dataset” available at the University of California, Irvine, Machine Learning Repository. For evaluation purposes, criteria such as “Precision”, “Incorrectly Classified Instances” and “Receiver Operating Characteristic (ROC) Areas” were considered. Results: The interpretation of the data suggests that the Naïve Bayes algorithm has the best performance among the three selected algorithms. Conclusions: It is believed – and apparently supported by the findings – that machine learning algorithms could significantly assist, at an intermediate level, in the analysis of scintigraphic data obtained in MPI, namely after stress acquisition, eventually increasing the efficiency of the entire system and potentially easing the roles of both Technologists and Nuclear Cardiologists. In the continuation of this study, it is planned to use more patient information and to significantly increase the population under study, in order to improve system accuracy.
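The study itself used WEKA; purely as an illustration of the comparative setup, the sketch below uses scikit-learn analogues (a depth-1 decision stump standing in for OneR, an unrestricted tree for J48, and GaussianNB for Naïve Bayes) on placeholder data with the SPECT dataset's dimensions.

```python
# Hedged sketch of the comparative evaluation with scikit-learn analogues of the
# WEKA algorithms used in the study. X and y are random placeholders shaped like
# the SPECT Heart Dataset (267 patients, 22 features), not the real data.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)
X = rng.normal(size=(267, 22))                # placeholder feature matrix
y = rng.integers(0, 2, size=267)              # placeholder binary labels

models = {
    "OneR-like stump": DecisionTreeClassifier(max_depth=1),
    "J48-like tree":   DecisionTreeClassifier(),
    "Naive Bayes":     GaussianNB(),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=10, scoring="roc_auc")
    print(f"{name}: mean ROC AUC = {scores.mean():.3f}")
```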
Abstract:
This thesis describes the development of the hardware and software of a system capable of recognizing the number of steps a person takes during physical activity. The system consists of an accelerometer controlled by a microcontroller, which communicates with a mobile device via Bluetooth. In order to build the system, an extensive bibliography had to be reviewed in order to learn the state of the art of this technology and to understand the operating principle of the Bluetooth protocol and the biomechanical concepts behind human gait. As its differentiating element with respect to the state of the art, this work proposed the use of an accelerometer together with pressure sensors. The combination of these sensors was intended to increase the precision of a type of device that is not usually recognized for that characteristic. However, the unavailability of the pressure sensors meant that the system ended up consisting of the accelerometer only, although it was designed so that the pressure sensors can be included in a future development. In this work, two algorithms were developed to detect the steps a person takes, with the foot on which the sensor is placed, when walking or running. In one of the tests performed, the “composite acceleration” algorithm detected 84% of the steps, while the “simple acceleration” algorithm detected 99%. The platform for the graphical interface was intended to be a mobile phone; however, it was not possible to obtain a phone supporting the SPP (Serial Port Profile) required to communicate with the Bluetooth module used. The solution was to use a laptop computer with Bluetooth as the platform, for which the “Pedómetro ISEP” application was developed in Visual Basic. “Pedómetro ISEP” offers several features, notably the calculation of the distance travelled, the speed and the calories burned, as well as the recording of these values in tables and the possibility of drawing graphs representing the user's progress.
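As a hedged illustration of accelerometer-based step counting (not the thesis's “composite acceleration” or “simple acceleration” algorithms), the sketch below counts upward threshold crossings of the acceleration magnitude; the threshold value and the sample stream are assumptions.

```python
# Illustrative threshold-crossing step counter on the acceleration magnitude.
# The 1.2 g threshold and the synthetic samples are assumptions for illustration.
import math

def count_steps(samples, threshold=1.2):
    """samples: iterable of (ax, ay, az) in g. Counts upward threshold crossings."""
    steps = 0
    above = False
    for ax, ay, az in samples:
        magnitude = math.sqrt(ax * ax + ay * ay + az * az)
        if magnitude > threshold and not above:
            steps += 1          # rising edge of a peak -> one step
            above = True
        elif magnitude < threshold:
            above = False
    return steps

# Example: quiet stance (about 1 g) with two acceleration peaks -> 2 steps counted.
data = ([(0.0, 0.0, 1.0)] * 5 + [(0.0, 0.3, 1.5)]
        + [(0.0, 0.0, 1.0)] * 5 + [(0.0, 0.3, 1.5)]
        + [(0.0, 0.0, 1.0)] * 5)
print(count_steps(data))
```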
Abstract:
Paracetamol is among the most widely consumed pharmaceuticals worldwide. Although its occurrence in the environment is well documented, data about the presence of its metabolites and transformation products are very scarce. The present work describes the development of an analytical method for the simultaneous determination of paracetamol, its principal metabolite (paracetamol-glucuronide) and its main transformation product (p-aminophenol), based on solid phase extraction (SPE) and high performance liquid chromatography coupled to diode array detection (HPLC-DAD). The method was applied to the analysis of river waters and proved suitable for use in routine analysis. Different SPE sorbents were compared, and the use of two Oasis WAX cartridges in tandem proved to be the most adequate approach for sample clean-up and pre-concentration. Under optimized conditions, limits of detection in the range 40–67 ng/L were obtained, as well as mean recoveries between 60 and 110% with relative standard deviations (RSD) below 6%. Finally, the developed SPE-HPLC/DAD method was successfully applied to the analysis of the selected compounds in samples from seven rivers located in the north of Portugal. All the compounds were detected, and this was the first time that paracetamol-glucuronide was found in river water, at concentrations up to 3.57 μg/L.
Abstract:
The paper formulates a genetic algorithm that evolves two types of objects in a plane. The fitness function promotes a relationship between the objects that is optimal when some kind of interface between them occurs. Furthermore, the algorithm adopts a hexagonal tessellation of the two-dimensional space to provide an efficient method of neighbour modelling. The genetic algorithm produces special patterns resembling those revealed in percolation phenomena or in the symbiosis found in lichens. Besides the analysis of the spatial layout, the time evolution is modelled by adopting a distance measure and by modelling in the Fourier domain from the perspective of fractional calculus. The results reveal a consistent, and easy to interpret, set of model parameters for distinct operating conditions.
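A small sketch of neighbour computation on a hexagonal tessellation using axial coordinates; the grid shape is an arbitrary illustration and this is not the paper's implementation.

```python
# Neighbour lookup on a hexagonal tessellation in axial coordinates (q, r).
# Each interior hexagonal cell has exactly six neighbours.
AXIAL_DIRECTIONS = [(+1, 0), (+1, -1), (0, -1), (-1, 0), (-1, +1), (0, +1)]

def hex_neighbours(q, r, cells):
    """Return the neighbours of cell (q, r) that exist in the grid."""
    return [(q + dq, r + dr) for dq, dr in AXIAL_DIRECTIONS if (q + dq, r + dr) in cells]

# Build a small rhombus-shaped grid of axial coordinates and query one cell.
cells = {(q, r) for q in range(5) for r in range(5)}
print(hex_neighbours(2, 2, cells))   # six neighbours for an interior cell
```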
Abstract:
To avoid additional hardware deployment, indoor localization systems have to be designed so that they rely on existing infrastructure only. Besides processing measurements between nodes, the localization procedure can incorporate all available environment information. In order to enhance the performance of Wi-Fi based localization systems, the innovative solution presented in this paper also considers negative information. An indoor tracking method inspired by Kalman filtering is also proposed.
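As a hedged sketch of the Kalman-inspired tracking idea, the code below runs a one-dimensional constant-velocity Kalman filter over assumed position measurements; the noise levels and measurements are illustrative and do not reproduce the paper's Wi-Fi method.

```python
# 1-D constant-velocity Kalman filter sketch; all parameters are assumptions.
import numpy as np

dt = 1.0
F = np.array([[1, dt], [0, 1]])        # constant-velocity state transition
H = np.array([[1, 0]])                 # only position is measured
Q = 0.01 * np.eye(2)                   # assumed process noise
R = np.array([[1.0]])                  # assumed measurement noise

x = np.array([[0.0], [0.0]])           # state: position, velocity
P = np.eye(2)                          # state covariance

for z in [0.9, 2.1, 2.8, 4.2, 5.1]:    # assumed position measurements (metres)
    # Predict step
    x = F @ x
    P = F @ P @ F.T + Q
    # Update step
    y = np.array([[z]]) - H @ x
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    print(f"measurement {z:.1f} -> estimated position {x[0, 0]:.2f}")
```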
Abstract:
Consider the problem of assigning real-time tasks on a heterogeneous multiprocessor platform comprising two different types of processors – such a platform is referred to as a two-type platform. We present two linearithmic time-complexity algorithms, SA and SA-P, each providing the following guarantee. For a given two-type platform and a given task set, if there exists a feasible task-to-processor-type assignment such that tasks can be scheduled to meet deadlines by allowing them to migrate only between processors of the same type, then (i) using SA, it is guaranteed to find such a feasible task-to-processor-type assignment, where the same restriction on task migration applies, but given a platform in which processors are 1+α/2 times faster, and (ii) SA-P succeeds in finding a feasible task-to-processor assignment where tasks are not allowed to migrate between processors, but given a platform in which processors are 1+α times faster, where 0 < α ≤ 1. The parameter α is a property of the task set – it is the maximum utilization of any task which is less than or equal to 1.
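To make the problem setting concrete, the sketch below shows a naive greedy task-to-processor-type assignment with a capacity check; it is only an illustration of the assignment problem, not the SA or SA-P algorithms, and the task utilizations and capacities are assumed.

```python
# Illustration of the two-type assignment problem (naive greedy heuristic, not SA
# or SA-P): each task has a different utilization on each processor type, and the
# total utilization assigned to each type must stay within its capacity.
tasks = {                                  # assumed utilizations: (on type-1, on type-2)
    "t1": (0.6, 0.3),
    "t2": (0.2, 0.8),
    "t3": (0.5, 0.5),
    "t4": (0.1, 0.4),
}
capacity = {"type1": 2.0, "type2": 2.0}    # e.g. two processors of each type
load = {"type1": 0.0, "type2": 0.0}
assignment = {}

for name, (u1, u2) in tasks.items():
    ptype = "type1" if u1 <= u2 else "type2"   # greedy: pick the cheaper type
    assignment[name] = ptype
    load[ptype] += u1 if ptype == "type1" else u2

feasible = all(load[t] <= capacity[t] for t in capacity)
print(assignment, load, "feasible:", feasible)
```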
Abstract:
Discrete-time control systems require sample-and-hold circuits to perform the conversion from digital to analog. Fractional-Order Holds (FROHs) are an interpolation between the classical zero- and first-order holds and can be tuned to produce better system performance. However, the model of the FROH is somewhat hermetic and the design of the system becomes unnecessarily complicated. This paper addresses the modelling of FROHs using the concepts of Fractional Calculus (FC). For this purpose, two simple fractional-order approximations are proposed, whose parameters are estimated by a genetic algorithm. The results are simple to interpret, demonstrating that FC is a useful tool for the analysis of these devices.
Abstract:
In this paper we discuss the challenges and design principles of an implementation of slot-based task-splitting algorithms in the Linux 2.6.34 kernel. We show that this kernel version provides the features required for implementing such scheduling algorithms. We show that the real behavior of the scheduling algorithm is very close to the theoretical one. We run and discuss experiments on 4-core and 24-core machines.
Abstract:
We focus on large-scale, dense, deeply embedded systems where, due to the large amount of information generated by all nodes, even simple aggregate computations such as the minimum value (MIN) of the sensor readings become notoriously expensive to obtain. Recent research has exploited a dominance-based medium access control (MAC) protocol, the CAN bus, for computing aggregated quantities in wired systems. For example, MIN can be computed efficiently, and an interpolation function which approximates sensor data in an area can be obtained efficiently as well. Dominance-based MAC protocols have recently been proposed for wireless channels, and these protocols can be expected to be used for achieving highly scalable aggregate computations in wireless systems, but no experimental demonstration is currently available in the research literature. In this paper, we demonstrate that highly scalable aggregate computations in wireless networks are possible. We do so by (i) building a new wireless hardware platform with appropriate characteristics for making dominance-based MAC protocols efficient, (ii) implementing dominance-based MAC protocols on this platform, (iii) implementing distributed algorithms for aggregate computations (MIN, MAX, interpolation) using the new implementation of the dominance-based MAC protocol and (iv) performing experiments to prove that such highly scalable aggregate computations in wireless networks are possible.
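As an illustration of the underlying mechanism, the sketch below simulates how bitwise dominance arbitration (as in CAN) lets the channel itself deliver the minimum of the contending values; the node readings and word length are assumptions.

```python
# Sketch of computing MIN "on the channel" with dominance-based arbitration:
# every node transmits its value most-significant bit first; a dominant (0) bit
# overwrites a recessive (1) bit, and a node that hears a dominant bit while
# sending a recessive one withdraws. The surviving value is the minimum.
def channel_min(values, bits=16):
    contenders = list(values)
    result = 0
    for i in reversed(range(bits)):                     # MSB first
        wired = min((v >> i) & 1 for v in contenders)   # dominant 0 wins the bit
        result = (result << 1) | wired
        # nodes whose bit was recessive while the channel carried dominant drop out
        contenders = [v for v in contenders if ((v >> i) & 1) == wired]
    return result

sensor_readings = [412, 397, 845, 520]    # assumed raw sensor values
print(channel_min(sensor_readings))       # -> 397, the minimum reading
```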