43 results for Prediction algorithms
at Instituto Politécnico do Porto, Portugal
Abstract:
This article investigates the limit cycle (LC) prediction of systems with backlash by means of the describing function (DF) when using discrete fractional-order (FO) algorithms. The DF is an approximate method that gives good estimates of LCs. The implementation of FO controllers requires the use of rational approximations, but such realizations produce distinct types of dynamic behavior. This study analyzes the accuracy of the prediction of LCs, namely their amplitude and frequency, when using several different algorithms. To illustrate this problem, we use FO-PID algorithms in the control of systems with backlash.
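As a minimal sketch of the rational approximations such FO realizations rely on, the code below implements one widely used scheme, Oustaloup's recursive zero-pole approximation of s^α. The fitting band, the order N and the value of α are illustrative assumptions, not the study's settings; comparing two orders hints at why distinct realizations produce distinct dynamics.

```python
import numpy as np
from scipy import signal

def oustaloup_zpk(alpha, wb=1e-2, wh=1e2, N=4):
    """Zeros, poles and gain approximating s**alpha over [wb, wh] rad/s."""
    k = np.arange(-N, N + 1)
    # Recursively spaced zeros and poles inside the fitting band
    wz = wb * (wh / wb) ** ((k + N + 0.5 * (1 - alpha)) / (2 * N + 1))
    wp = wb * (wh / wb) ** ((k + N + 0.5 * (1 + alpha)) / (2 * N + 1))
    return signal.ZerosPolesGain(-wz, -wp, wh ** alpha)

# Two realizations of s**0.5 of different orders: their frequency responses
# (and hence any limit-cycle predictions built on them) do not coincide.
w = np.logspace(-3, 3, 400)
for N in (2, 6):
    sys = oustaloup_zpk(0.5, N=N)
    _, mag, _ = signal.bode(sys, w)
    print(f"N={N}: gain at 1 rad/s = {mag[np.argmin(np.abs(w - 1.0))]:.2f} dB")
```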
Abstract:
Introduction: Image resizing is a standard feature of Nuclear Medicine digital imaging. Resizing is applied whenever the total number of pixels needs to be increased or decreased, and manufacturers use upsampling to fit the acquired images adequately on the display screen. This paper aims to compare the “hqnx” and the “nxSaI” magnification algorithms with two interpolation algorithms – “nearest neighbor” and “bicubic interpolation” – in image upsampling operations. Material and Methods: Three distinct Nuclear Medicine images were enlarged 2 and 4 times with the different digital image resizing algorithms (nearest neighbor, bicubic interpolation, nxSaI and hqnx). To evaluate the pixel changes between the different output images, 3D whole-image plot profiles and surface plots were used, in addition to visual inspection of the 4x upsampled images. Results: In the 2x enlarged images the visual differences were not very noteworthy, although bicubic interpolation clearly presented the best results. In the 4x enlarged images the differences were significant, with the bicubic interpolated images again presenting the best results. Images resized with hqnx presented better quality than those produced by 4xSaI and nearest-neighbor interpolation; however, the intense “halo effect” greatly degrades the definition and boundaries of the image contents. Conclusion: The hqnx and the nxSaI algorithms were designed for images with clear edges, so their use on Nuclear Medicine images is clearly inadequate. Of the algorithms studied, bicubic interpolation seems the most suitable, and its ever wider range of applications seems to confirm this, establishing it as an efficient algorithm for multiple image types.
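A minimal sketch of the two interpolation baselines in the comparison, using scipy.ndimage.zoom (spline order 0 is nearest neighbor, order 3 is cubic). The synthetic Poisson "image" is a placeholder for a real Nuclear Medicine matrix, and the hqnx/nxSaI algorithms themselves are not reproduced here.

```python
import numpy as np
from scipy.ndimage import zoom

rng = np.random.default_rng(0)
img = rng.poisson(lam=20.0, size=(64, 64)).astype(float)  # count-like data

for factor in (2, 4):                     # the 2x and 4x enlargements studied
    nearest = zoom(img, factor, order=0)  # nearest neighbor: blocky
    bicubic = zoom(img, factor, order=3)  # cubic spline: smooth, can overshoot
    print(factor, nearest.shape, bicubic.shape,
          f"bicubic range {bicubic.min():.1f}..{bicubic.max():.1f}")
```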
Abstract:
Introduction: A major focus of the data mining process, and of machine learning research in particular, is to automatically learn to recognize complex patterns and help make adequate decisions based strictly on the acquired data. Since imaging techniques like Myocardial Perfusion Imaging (MPI) in Nuclear Cardiology can occupy a large part of the daily workflow and generate gigabytes of data, computerized analysis could offer advantages over human analysis: shorter time, homogeneity and consistency, automatic recording of analysis results, relatively low cost, etc. Objectives: The aim of this study is to evaluate the efficacy of this methodology on MPI Stress studies and on the decision of whether or not to continue the evaluation of each patient. The objective pursued was to automatically classify a patient's test into one of three groups: “Positive”, “Negative” and “Indeterminate”. “Positive” patients would proceed directly to the Rest part of the exam, “Negative” patients would be directly exempted from continuation, and only the “Indeterminate” group would require the clinician's analysis, thus economizing the clinician's effort, increasing workflow fluidity at the technologist's level and probably saving patients' time. Methods: The WEKA v3.6.2 open source software was used to make a comparative analysis of three WEKA algorithms (“OneR”, “J48” and “Naïve Bayes”) in a retrospective study of the “SPECT Heart Dataset”, available at the University of California, Irvine Machine Learning Repository, using the corresponding clinical results, signed by expert nuclear cardiologists, as reference. For evaluation purposes, criteria such as “Precision”, “Incorrectly Classified Instances” and “Receiver Operating Characteristic (ROC) Areas” were considered. Results: The interpretation of the data suggests that the Naïve Bayes algorithm has the best performance of the three selected algorithms. Conclusions: It is believed, and apparently supported by the findings, that machine learning algorithms could significantly assist, at an intermediary level, in the analysis of scintigraphic data obtained on MPI, namely after Stress acquisition, thus potentially increasing the efficiency of the entire system and easing the roles of both Technologists and Nuclear Cardiologists. In the continuation of this study, it is planned to use more patient information and to significantly increase the study population, in order to improve system accuracy.
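A hedged sketch of the comparison methodology using scikit-learn analogues rather than WEKA itself: BernoulliNB for Naïve Bayes (the SPECT features are binary), DecisionTreeClassifier for J48 (C4.5), and a depth-1 decision stump as a rough stand-in for OneR. The random matrix below merely mimics the shape of the UCI SPECT Heart data (267 instances, 22 binary attributes); it is not the real dataset.

```python
import numpy as np
from sklearn.naive_bayes import BernoulliNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(267, 22))       # 22 binary partial diagnoses
y = (X[:, :5].sum(axis=1) > 2).astype(int)   # toy "Positive"/"Negative" label

models = {
    "OneR-like stump": DecisionTreeClassifier(max_depth=1, random_state=0),
    "J48-like tree": DecisionTreeClassifier(random_state=0),
    "Naive Bayes": BernoulliNB(),
}
for name, model in models.items():
    # 10-fold cross-validated ROC area, one of the criteria used in the study
    auc = cross_val_score(model, X, y, cv=10, scoring="roc_auc").mean()
    print(f"{name}: mean ROC area = {auc:.3f}")
```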
Abstract:
With the expansion of Digital Television and the convergence between conventional broadcasting and television over IP, the number of available channels has gradually increased, placing the viewer in a difficult position when choosing which programme to watch. Overloaded with a large number of programmes and associated information, many viewers systematically give up watching a programme and tend to zap between several channels or to always watch the same programmes or channels. Faced with this information-overload problem, recommender systems present themselves as a solution. This thesis studies some of the existing television recommendation systems and develops an application that recommends a set of programmes of potential interest to the viewer. The main concepts in the field of recommendation algorithms are covered, and some of the television programme recommendation systems developed to date are presented. To produce the recommendations, two algorithms were developed, based respectively on collaborative filtering and content-based filtering techniques. By computing the similarity between items or between users, these algorithms predict the rating a user would assign to a given item (television programme, film, etc.). In this way it is possible to assess the user's level of potential interest in that item. The datasets describing the characteristics of the programmes (title, genre, actors, etc.) are stored according to the TV-Anytime standard. This multimedia content description standard has the advantage of being specifically aimed at audiovisual content and is freely available. The set of recommendations obtained is presented to the user through a Web application that integrates all the components of the system. To validate the work, a test dataset named htrec2011-movielens-2k was used, whose content corresponds to a set of films rated by several users in a real environment. Besides the ratings assigned by users, this set of films includes data describing the genre, the directors and the country of origin. For the final validation of the work, several tests were performed, the most relevant of which evaluated the distance between predictions and actual values, with the aim of assessing the ability of the developed algorithms to accurately predict the ratings users would assign to the analysed items.
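A minimal sketch of the rating-prediction step described above: item-based collaborative filtering with cosine similarity computed over co-rated entries. The tiny ratings matrix is a placeholder for the htrec2011-movielens-2k data (0 means not rated).

```python
import numpy as np

R = np.array([[5, 3, 0, 1],        # users x items rating matrix (toy data)
              [4, 0, 0, 1],
              [1, 1, 0, 5],
              [0, 1, 5, 4]], dtype=float)

def cosine_sim(a, b):
    """Cosine similarity restricted to entries rated in both items."""
    mask = (a > 0) & (b > 0)
    if not mask.any():
        return 0.0
    return a[mask] @ b[mask] / (np.linalg.norm(a[mask]) * np.linalg.norm(b[mask]))

def predict(user, item):
    """Similarity-weighted average of the user's ratings on other items."""
    sims = np.array([cosine_sim(R[:, item], R[:, j]) if R[user, j] > 0 else 0.0
                     for j in range(R.shape[1])])
    if sims.sum() == 0:
        return 0.0
    return sims @ R[user] / sims.sum()

print(f"predicted rating of user 0 on item 2: {predict(0, 2):.2f}")
```

Comparing such predictions against held-out real ratings is exactly the distance-based evaluation mentioned at the end of the abstract.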
Abstract:
The paper formulates a genetic algorithm that evolves two types of objects in a plane. The fitness function promotes a relationship between the objects that is optimal when some kind of interface between them occurs. Furthermore, the algorithm adopts a hexagonal tessellation of the two-dimensional space to provide an efficient method of modelling neighbourhoods. The genetic algorithm produces special patterns with resemblances to those revealed in percolation phenomena or in the symbiosis found in lichens. Besides the analysis of the spatial layout, the time evolution is modelled by adopting a distance measure and modelling in the Fourier domain from the perspective of fractional calculus. The results reveal a consistent, and easy to interpret, set of model parameters for distinct operating conditions.
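A minimal sketch of the neighbourhood bookkeeping on a hexagonal tessellation, using axial coordinates (q, r) so that every interior cell has exactly six neighbours. The coordinates, grid size and the interface-counting idea in the comment are illustrative assumptions, not the paper's exact encoding.

```python
# The six axial-coordinate offsets of a hexagonal grid
AXIAL_DIRECTIONS = [(1, 0), (1, -1), (0, -1), (-1, 0), (-1, 1), (0, 1)]

def hex_neighbours(q, r, size=20):
    """Neighbours of cell (q, r), clipped to a size x size rhombus of cells."""
    return [(q + dq, r + dr) for dq, dr in AXIAL_DIRECTIONS
            if 0 <= q + dq < size and 0 <= r + dr < size]

# e.g. a fitness term could reward interfaces: count, over the whole grid,
# the neighbours whose object type differs from the cell's own type.
print(hex_neighbours(0, 0))        # corner cell: fewer than six inside grid
print(len(hex_neighbours(5, 5)))   # interior cell: all six
```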
Abstract:
Geostatistics has been successfully used to analyze and characterize the spatial variability of environmental properties. Besides giving estimated values at unsampled locations, it provides a measure of the accuracy of the estimate, which is a significant advantage over traditional methods used to assess pollution. In this work, universal block kriging is used, in a novel application, to model and map the spatial distribution of salinity measurements gathered by an Autonomous Underwater Vehicle in a sea outfall monitoring campaign, with the aim of distinguishing the effluent plume from the receiving waters, characterizing its spatial variability in the vicinity of the discharge and estimating dilution. The results demonstrate that the geostatistical methodology can provide good estimates of the dispersion of effluents, which are very valuable in assessing the environmental impact and managing sea outfalls. Moreover, since accurate measurements of the plume's dilution are rare, these studies might be very helpful in the future for validating dispersion models.
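A simplified sketch of the kriging principle in numpy: ordinary point kriging with a fixed exponential variogram. The study itself uses universal block kriging (drift terms, block support), which this toy version with made-up salinity samples does not reproduce; it only shows how the estimate and its accompanying accuracy measure (the kriging variance) arise from one linear system.

```python
import numpy as np

def variogram(h, sill=1.0, a_range=50.0):
    """Exponential variogram model; sill and range are assumed values."""
    return sill * (1.0 - np.exp(-3.0 * h / a_range))

def ordinary_kriging(xy, z, target):
    n = len(z)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    A = np.ones((n + 1, n + 1))             # bordered kriging matrix
    A[:n, :n] = variogram(d)
    A[n, n] = 0.0
    b = np.ones(n + 1)
    b[:n] = variogram(np.linalg.norm(xy - target, axis=1))
    w = np.linalg.solve(A, b)               # weights + Lagrange multiplier
    return w[:n] @ z, w @ b                 # estimate, kriging variance

xy = np.array([[0., 0.], [40., 10.], [10., 35.], [50., 50.]])  # sample sites
z = np.array([35.1, 34.6, 34.9, 34.2])                          # salinity
print(ordinary_kriging(xy, z, np.array([25., 25.])))
```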
Abstract:
To avoid additional hardware deployment, indoor localization systems have to be designed in such a way that they rely on existing infrastructure only. Besides processing measurements between nodes, the localization procedure can incorporate all available environment information. In order to enhance the performance of Wi-Fi based localization systems, the innovative solution presented in this paper also considers negative information. An indoor tracking method inspired by Kalman filtering is also proposed.
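A minimal sketch of the kind of Kalman-style tracking alluded to: a constant-velocity model fed with 2-D position fixes (e.g. Wi-Fi-based estimates). All matrices and noise levels are illustrative assumptions; the paper's handling of negative information is not reproduced.

```python
import numpy as np

dt = 1.0
F = np.array([[1, 0, dt, 0], [0, 1, 0, dt], [0, 0, 1, 0], [0, 0, 0, 1.]])
H = np.array([[1, 0, 0, 0], [0, 1, 0, 0.]])      # observe position only
Q = 0.01 * np.eye(4)                              # process noise (assumed)
Rm = 4.0 * np.eye(2)                              # Wi-Fi fix noise (assumed)

x, P = np.zeros(4), 10.0 * np.eye(4)              # initial state/covariance
for zx, zy in [(0.5, 0.1), (1.1, 0.4), (2.0, 0.9)]:   # toy position fixes
    x, P = F @ x, F @ P @ F.T + Q                 # predict
    y = np.array([zx, zy]) - H @ x                # innovation
    S = H @ P @ H.T + Rm
    K = P @ H.T @ np.linalg.inv(S)                # Kalman gain
    x, P = x + K @ y, (np.eye(4) - K @ H) @ P     # update
print(f"estimated position: {x[:2]}, velocity: {x[2:]}")
```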
Abstract:
Consider the problem of assigning real-time tasks on a heterogeneous multiprocessor platform comprising two different types of processors — such a platform is referred to as a two-type platform. We present two linearithmic time-complexity algorithms, SA and SA-P, each providing the following guarantee. For a given two-type platform and a given task set, if there exists a feasible task-to-processor-type assignment such that tasks can be scheduled to meet deadlines by allowing them to migrate only between processors of the same type, then (i) using SA, it is guaranteed to find such a feasible task-to-processor-type assignment, where the same restriction on task migration applies, given a platform in which processors are 1+α/2 times faster, and (ii) SA-P succeeds in finding a feasible task-to-processor assignment where tasks are not allowed to migrate between processors, given a platform in which processors are 1+α times faster, where 0<α≤1. The parameter α is a property of the task set — it is the maximum utilization of any task that is less than or equal to 1.
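A toy illustration of the problem setting only, not of SA or SA-P themselves: each task has one utilization per processor type, and α is the largest of those utilizations that does not exceed 1.

```python
# Toy two-type task set: (utilization on type-1, utilization on type-2).
# Values are made up; utilizations above 1 are ignored when computing alpha.
tasks = [(0.30, 0.60), (0.50, 0.20), (0.70, 1.40)]

alpha = max(u for pair in tasks for u in pair if u <= 1.0)
print(f"alpha = {alpha}")                        # here 0.7
print(f"SA speedup factor:   {1 + alpha / 2}")   # migrative within a type
print(f"SA-P speedup factor: {1 + alpha}")       # fully partitioned
```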
Abstract:
In this paper we discuss the challenges and design principles of an implementation of slot-based task-splitting algorithms in the Linux 2.6.34 kernel. We show that this kernel version provides the features required for implementing such scheduling algorithms. We show that the real behavior of the scheduling algorithm is very close to the theoretical one. We run and discuss experiments on 4-core and 24-core machines.
Abstract:
Multiprocessors, particularly in the form of multicores, are becoming standard building blocks for executing reliable software. But their use for applications with hard real-time requirements is non-trivial. Well-known real-time scheduling algorithms from the uniprocessor context (Rate-Monotonic [1] or Earliest-Deadline-First [1]) do not perform well on multiprocessors. For this reason the scientific community in the area of real-time systems has produced new algorithms specifically for multiprocessors. Meanwhile, a proposal [2] exists for extending the Ada language with new basic constructs that can be used to implement new real-time scheduling algorithms; the family of task-splitting algorithms is one of those emphasized in the proposal [2]. Consequently, assessing whether existing task-splitting multiprocessor scheduling algorithms can be implemented with these constructs is paramount. In this paper we present a list of state-of-the-art task-splitting multiprocessor scheduling algorithms and, for each of them, we present detailed Ada code that uses the new constructs.
Abstract:
A MATLAB/SIMULINK-based simulator was employed for studies concerning the control of baker's yeast fed-batch fermentation. Four control algorithms were implemented and compared: classical PID control, two discrete versions (the modified velocity and position algorithms), and a fuzzy law. The simulation package proved to be an efficient tool for simulating and testing control strategies for the nonlinear process.
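A minimal sketch of the two discrete PID forms compared in the study, the position and the (incremental) velocity algorithms, in plain Python. Gains, sampling period and the error trace are illustrative assumptions, and the MATLAB/SIMULINK fermentation simulator itself is not reproduced.

```python
KP, KI, KD, TS = 2.0, 0.5, 0.1, 0.1   # gains and sampling period (assumed)

def pid_position(errors):
    """Position form: u[k] = Kp*e[k] + Ki*Ts*sum(e) + Kd*(e[k]-e[k-1])/Ts."""
    u, acc = [], 0.0
    for k, e in enumerate(errors):
        acc += e * TS
        de = (e - errors[k - 1]) / TS if k else 0.0
        u.append(KP * e + KI * acc + KD * de)
    return u

def pid_velocity(errors):
    """Velocity form: accumulates increments du[k]; convenient against windup."""
    u, prev_u = [], 0.0
    for k, e in enumerate(errors):
        e1 = errors[k - 1] if k >= 1 else 0.0
        e2 = errors[k - 2] if k >= 2 else 0.0
        du = KP * (e - e1) + KI * TS * e + KD * (e - 2 * e1 + e2) / TS
        prev_u += du
        u.append(prev_u)
    return u

errs = [1.0, 0.8, 0.5, 0.3, 0.1]      # toy error sequence
print(pid_position(errs))
print(pid_velocity(errs))
```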
Abstract:
Adhesive joints are widely employed nowadays as a fast and effective joining process. The respective techniques for strength prediction have also improved over the years. Cohesive Zone Models (CZMs) coupled to Finite Element Method (FEM) analyses surpass the limitations of stress and fracture criteria and allow the modelling of damage. CZMs require the energy release rates in tension (Gn) and shear (Gs) and the respective fracture energies in tension (Gnc) and shear (Gsc). Additionally, the cohesive strengths (tn0 for tension and ts0 for shear) must also be defined. In this work, the influence of the parameters of a triangular CZM used to model a thin adhesive layer is studied, to estimate their effect on the strength predictions. Some conclusions were drawn regarding the accuracy of the simulation results under variations of each of these parameters.
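A minimal sketch of a triangular (bilinear) traction-separation law in a pure mode: a linear rise to the cohesive strength t0 with initial stiffness k, then linear softening so that the area under the curve equals the fracture energy Gc. The numerical values are illustrative assumptions, not the adhesive properties used in the work.

```python
import numpy as np

def triangular_czm(delta, k=1e4, t0=20.0, gc=0.5):
    """Traction (MPa) vs separation (mm); k in MPa/mm, gc in N/mm."""
    d0 = t0 / k                  # separation at damage onset
    df = 2.0 * gc / t0           # final separation (triangle area equals gc)
    if delta <= d0:
        return k * delta         # elastic branch
    if delta < df:
        return t0 * (df - delta) / (df - d0)   # linear softening branch
    return 0.0                   # complete failure

for d in np.linspace(0.0, 0.06, 7):
    print(f"delta = {d:.3f} mm -> t = {triangular_czm(d):.2f} MPa")
```

Varying k, t0 or gc (with the other two fixed) reshapes this triangle, which is precisely the kind of parameter influence the study quantifies.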
Abstract:
Although their physical properties make composites an attractive family of materials, machining them can cause several damage modes, such as delamination, fibre pull-out and thermal degradation. Minimizing the axial thrust force during drilling reduces the probability of delamination onset, as has been demonstrated by analytical models based on linear elastic fracture mechanics (LEFM). A finite element model using solid elements from the ABAQUS® software library and interface elements incorporating a cohesive damage model was developed in order to simulate thrust forces and delamination onset during drilling. Thrust force results for delamination onset are compared with existing analytical models.
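A minimal sketch of the classic LEFM estimate on which such analytical models build: the Hocheng-Dharan critical thrust force for exit delamination, F_crit = pi * sqrt(8 * GIc * E * h^3 / (3 * (1 - nu^2))), where h is the uncut thickness under the drill. The material values below are illustrative carbon/epoxy-like numbers, not the study's data.

```python
import math

def critical_thrust(g_ic, e_mod, h, nu):
    """g_ic [N/mm], e_mod [MPa], h = uncut thickness [mm] -> force in N."""
    return math.pi * math.sqrt(8.0 * g_ic * e_mod * h**3 / (3.0 * (1.0 - nu**2)))

for h in (0.125, 0.25, 0.5):   # one, two and four plies left uncut (assumed)
    f = critical_thrust(g_ic=0.2, e_mod=12000.0, h=h, nu=0.3)
    print(f"h = {h} mm -> F_crit = {f:.1f} N")
```

The strong h^3 dependence explains why delamination risk is highest near drill exit, where few uncut plies remain.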