69 results for Adaptive algorithms
Abstract:
With the current increase in energy resource prices and growing environmental concerns, intelligent load management systems are gaining more and more importance. This paper concerns a SCADA House Intelligent Management (SHIM) system that includes an optimization module using deterministic and genetic algorithm approaches. SHIM undertakes contextual load management based on the characterization of each situation. SHIM considers available generation resources, load demand, supplier/market electricity prices, and consumers' constraints and preferences. The paper focuses on the recently developed learning module, which is based on artificial neural networks (ANN). The learning module allows the adjustment of users' profiles over SHIM's lifetime. A case study is presented, considering a system with fourteen discrete and four variable loads managed by a SHIM system during five consecutive similar weekends.
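For illustration, below is a minimal sketch of a genetic algorithm for load scheduling of the kind the optimization module could use. It is not SHIM's actual implementation: the tariff curve, load powers, demand cap, and GA parameters are all assumptions.

```python
# Illustrative sketch (not SHIM's actual implementation): a minimal genetic
# algorithm that places discrete loads into time slots so as to minimise
# energy cost under a hypothetical per-slot tariff and a soft demand cap.
import random

N_LOADS, N_SLOTS = 14, 24                  # assumed: 14 discrete loads, hourly slots
PRICES = [0.10 + 0.08 * (8 <= h <= 20) for h in range(N_SLOTS)]  # toy tariff
POWER = [1.0 + 0.1 * i for i in range(N_LOADS)]                  # kW per load
MAX_DEMAND = 6.0                           # assumed aggregate demand limit (kW)

def cost(schedule):
    """Energy cost plus a penalty whenever the per-slot demand cap is exceeded."""
    total = sum(POWER[i] * PRICES[s] for i, s in enumerate(schedule))
    for slot in range(N_SLOTS):
        demand = sum(POWER[i] for i, s in enumerate(schedule) if s == slot)
        total += 10.0 * max(0.0, demand - MAX_DEMAND)   # soft constraint
    return total

def evolve(pop_size=50, generations=200):
    pop = [[random.randrange(N_SLOTS) for _ in range(N_LOADS)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost)
        survivors = pop[:pop_size // 2]                 # elitist selection
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, N_LOADS)
            child = a[:cut] + b[cut:]                   # one-point crossover
            if random.random() < 0.2:                   # mutation
                child[random.randrange(N_LOADS)] = random.randrange(N_SLOTS)
            children.append(child)
        pop = survivors + children
    return min(pop, key=cost)

best = evolve()
print("best schedule:", best, "cost:", round(cost(best), 2))
```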
Abstract:
The aim of this paper is to present the modules of the Adaptive Educational Hypermedia System PCMAT that are responsible for the recommendation of learning objects. PCMAT is an online collaborative learning platform with a constructivist approach, which assesses the user's knowledge and presents contents and activities adapted to the characteristics and learning style of mathematics students in basic schools. The recommendation module and the search and retrieval module choose the most suitable learning object based on the user's characteristics and performance, and in this way contribute to the system's adaptability.
Abstract:
This paper is about PCMAT, an adaptive learning platform for mathematics in basic education schools. Based on a constructivist approach, PCMAT aims at verifying how techniques from adaptive hypermedia systems can improve e-learning based systems. To achieve this goal, PCMAT includes a Pedagogical Model containing a set of adaptation rules that influence student-platform interaction. PCMAT was subjected to preliminary testing with students aged between 12 and 14 on the subject of direct proportionality. The results of this preliminary test are quite promising, as they seem to demonstrate the validity of our proposal.
Abstract:
The aim of this paper is to present the recommendation module of the Mathematics Collaborative Learning Platform (PCMAT). PCMAT is an Adaptive Educational Hypermedia System (AEHS) with a constructivist approach, which presents contents and activities adapted to the characteristics and learning style of mathematics students in basic schools. The recommendation module is responsible for choosing different learning resources for the platform, based on the user's characteristics and performance. Since the main purpose of an adaptive system is to provide the user with content and interface adaptation, the recommendation module is integral to PCMAT's adaptation model.
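As a hypothetical sketch of the kind of scoring such a recommendation module might perform, the snippet below ranks candidate learning objects by how well their metadata match the student's learning style and current performance. All field names and weights are illustrative, not PCMAT's actual model.

```python
# Hypothetical scoring sketch, not PCMAT's actual model: favour objects that
# match the student's learning style and sit slightly above his measured
# performance level.
def score(learning_object, student):
    s = 0.0
    if learning_object["style"] == student["learning_style"]:
        s += 2.0                                  # style match weighted highest
    # prefer material slightly above the student's measured performance
    s -= abs(learning_object["difficulty"] - (student["performance"] + 0.1))
    return s

objects = [
    {"id": "lo-1", "style": "visual", "difficulty": 0.4},
    {"id": "lo-2", "style": "verbal", "difficulty": 0.5},
    {"id": "lo-3", "style": "visual", "difficulty": 0.8},
]
student = {"learning_style": "visual", "performance": 0.35}
best = max(objects, key=lambda lo: score(lo, student))
print("recommended:", best["id"])    # -> lo-1
```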
Abstract:
Introduction: Image resizing is a standard feature of Nuclear Medicine digital imaging. Upsampling is applied by manufacturers to fit the acquired images to the display screen whenever the total number of pixels needs to be increased or decreased. This paper compares the "hqnx" and "nxSaI" magnification algorithms with two interpolation algorithms, "nearest neighbor" and "bicubic interpolation", in image upsampling operations. Material and Methods: Three distinct Nuclear Medicine images were enlarged 2x and 4x with the different digital image resizing algorithms (nearest neighbor, bicubic interpolation, nxSaI and hqnx). To evaluate the pixel changes between the different output images, 3D whole-image plot profiles and surface plots were used in addition to visual inspection of the 4x upsampled images. Results: In the 2x enlarged images the visual differences were not very noteworthy, although it was clearly noticed that bicubic interpolation presented the best results. In the 4x enlarged images the differences were significant, with the bicubic interpolated images again presenting the best results. Hqnx resized images showed better quality than 4xSaI and nearest neighbor interpolated images; however, their intense "halo effect" greatly affects the definition and boundaries of the image contents. Conclusion: The hqnx and nxSaI algorithms were designed for images with clear edges, so their use on Nuclear Medicine images is clearly inadequate. Of the algorithms studied, bicubic interpolation seems the most suitable, and its increasingly wide range of applications suggests it can be regarded as an efficient algorithm for multiple image types.
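The nearest-neighbor/bicubic part of the comparison is easy to reproduce with Pillow's built-in resampling filters, as in the sketch below; "input.png" is a placeholder path, and Pillow ships no hqnx/nxSaI filters, so only the two interpolation algorithms are shown.

```python
# Minimal reproduction of the interpolation comparison using Pillow.
# "input.png" is a placeholder; hqnx/nxSaI are not available in Pillow.
from PIL import Image

img = Image.open("input.png")                       # placeholder image path
for factor in (2, 4):
    size = (img.width * factor, img.height * factor)
    img.resize(size, Image.NEAREST).save(f"nearest_{factor}x.png")
    img.resize(size, Image.BICUBIC).save(f"bicubic_{factor}x.png")
```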
Abstract:
Introduction: A major focus of the data mining process, and of machine learning research in particular, is to automatically learn to recognize complex patterns and help make adequate decisions based strictly on the acquired data. Since imaging techniques like MPI (Myocardial Perfusion Imaging in Nuclear Cardiology) can take up a large part of the daily workflow and generate gigabytes of data, computerized analysis could offer advantages over human analysis: shorter time, homogeneity and consistency, automatic recording of analysis results, relatively low cost, etc. Objectives: The aim of this study is to evaluate the efficacy of this methodology for the evaluation of MPI stress studies and for the decision about whether or not to continue the evaluation of each patient. The objective pursued is to automatically classify a patient test into one of three groups: "Positive", "Negative" and "Indeterminate". "Positive" tests would proceed directly to the rest part of the exam, "Negative" tests would be directly exempted from continuation, and only the "Indeterminate" group would require the clinician's analysis, thus saving clinicians' effort, increasing workflow fluidity at the technologists' level and probably sparing patients' time. Methods: The WEKA v3.6.2 open source software was used to make a comparative analysis of three WEKA algorithms ("OneR", "J48" and "Naïve Bayes") in a retrospective study of the "SPECT Heart Dataset", available at the University of California, Irvine Machine Learning Repository, using the corresponding clinical results, signed by expert nuclear cardiologists, as the reference. For evaluation purposes, criteria such as "Precision", "Incorrectly Classified Instances" and "Receiver Operating Characteristic (ROC) Areas" were considered. Results: The interpretation of the data suggests that the Naïve Bayes algorithm has the best performance among the three selected algorithms. Conclusions: It is believed, and apparently supported by the findings, that machine learning algorithms could significantly assist, at an intermediary level, in the analysis of scintigraphic data obtained from MPI, namely after stress acquisition, eventually increasing the efficiency of the entire system and potentially easing the roles of both technologists and nuclear cardiologists. In the continuation of this study, it is planned to use more patient information and to significantly increase the population under study, in order to improve system accuracy.
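The triage idea can be sketched with scikit-learn rather than WEKA: a Bernoulli Naïve Bayes over binary SPECT-like features, where only low-confidence predictions fall into the "Indeterminate" (clinician review) group. The synthetic data and the 0.8/0.2 confidence thresholds below are assumptions, not values from the study.

```python
# Illustrative triage sketch with scikit-learn (the study itself used WEKA).
# Synthetic binary features stand in for the SPECT Heart Dataset; the
# confidence thresholds are assumed, not taken from the paper.
import numpy as np
from sklearn.naive_bayes import BernoulliNB

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(200, 22))       # 22 binary features, as in SPECT
y = (X[:, :5].sum(axis=1) >= 3).astype(int)  # toy labels for the sketch

clf = BernoulliNB().fit(X[:150], y[:150])
proba = clf.predict_proba(X[150:])

for p in proba[:10]:
    if p[1] >= 0.8:
        label = "Positive"        # proceed to the rest acquisition
    elif p[1] <= 0.2:
        label = "Negative"        # exempt from continuation
    else:
        label = "Indeterminate"   # defer to the nuclear cardiologist
    print(label, round(p[1], 2))
```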
Abstract:
The main objective of an Adaptive System is to adapt its relation with the user (content presentation, navigation, interface, etc.) according to a predefined but updatable model of the user that reflects his objectives, preferences, knowledge and competences [Brusilovsky, 2001], [De Bra, 2004]. For Educational Adaptive Systems, the emphasis is placed on the student's knowledge of the application domain and on his learning style, to allow him to reach the learning objectives proposed for his training [Chepegin, 2004]. In Educational AHS, the User Model (UM), or Student Model, has increased relevance: when the student reaches the objectives of the course, the system must be able to readapt, for example, to his knowledge [Brusilovsky, 2001]. Learning Styles are understood as models of how a given person learns. It is generally understood that each person has a different, preferred Learning Style through which he achieves better results. Some case studies have proposed that teachers should assess the learning styles of their students and adapt their classrooms and methods to best fit each student's learning style [Kolb, 2005], [Martins, 2008]. The learning process must take into consideration the student's individual cognitive and emotional characteristics. In summary, each student is unique, so each student's personal progress must be monitored and teaching should not be generalized and repetitive [Jonassen, 1991], [Martins, 2008]. The aim of this paper is to present an Educational Adaptive Hypermedia Tool based on Progressive Assessment.
Abstract:
This document is a survey of the research area of User Modeling (UM) for the specific field of Adaptive Learning. The aims of this document are: to define what a User Model is; to present existing and well-known User Models; to analyze the existing standards related to UM; and to compare existing systems. In the scientific area of User Modeling, numerous research and development systems already seem to promise good results, but further experimentation and implementation are still necessary to draw conclusions about the utility of UM. That is, the experimentation and implementation of these systems are still too scarce to determine the utility of some of the referred applications. At present, Student Modeling research moves in the direction of making it possible to reuse a student model in different systems. Standards are more and more relevant for this purpose, allowing systems to communicate and to share data, components and structures at the syntactic and semantic levels, even if most of them still only allow syntactic integration.
Abstract:
The paper formulates a genetic algorithm that evolves two types of objects in a plane. The fitness function promotes a relationship between the objects that is optimal when some kind of interface between them occurs. Furthermore, the algorithm adopts a hexagonal tessellation of the two-dimensional space, providing an efficient method of neighbour modelling. The genetic algorithm produces special patterns with resemblances to those revealed in percolation phenomena or in the symbiosis found in lichens. Besides the analysis of the spatial layout, the time evolution is modelled by adopting a distance measure and by modelling in the Fourier domain from the perspective of fractional calculus. The results reveal a consistent, easy-to-interpret set of model parameters for distinct operating conditions.
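The appeal of the hexagonal tessellation is that every cell has exactly six equidistant neighbours, which makes neighbourhood evaluation in the fitness function uniform and cheap. A minimal sketch of that bookkeeping in axial coordinates (an assumed representation, not necessarily the paper's) follows.

```python
# Sketch of hexagonal-grid neighbourhood bookkeeping in axial coordinates
# (q, r): every cell has exactly six neighbours, unlike a square grid where
# 4- and 8-neighbourhoods must be distinguished.
AXIAL_DIRECTIONS = [(1, 0), (1, -1), (0, -1), (-1, 0), (-1, 1), (0, 1)]

def hex_neighbours(q, r):
    return [(q + dq, r + dr) for dq, dr in AXIAL_DIRECTIONS]

print(hex_neighbours(0, 0))
# [(1, 0), (1, -1), (0, -1), (-1, 0), (-1, 1), (0, 1)]
```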
Abstract:
To avoid additional hardware deployment, indoor localization systems have to be designed in such a way that they rely on existing infrastructure only. Besides processing measurements between nodes, the localization procedure can incorporate all available environment information. In order to enhance the performance of Wi-Fi based localization systems, the innovative solution presented in this paper also considers negative information. An indoor tracking method inspired by Kalman filtering is also proposed.
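For reference, below is a minimal sketch of the Kalman-style tracking idea: a constant-velocity model in 2D fed with noisy position fixes (e.g. Wi-Fi based estimates). The noise parameters are illustrative, not the paper's.

```python
# Minimal constant-velocity Kalman filter sketch for 2D indoor tracking.
# All noise parameters are illustrative assumptions.
import numpy as np

dt = 1.0
F = np.array([[1, 0, dt, 0], [0, 1, 0, dt],
              [0, 0, 1, 0], [0, 0, 0, 1]], float)   # state transition
H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)   # observe position only
Q = 0.01 * np.eye(4)                                 # process noise
R = 1.0 * np.eye(2)                                  # measurement noise

x = np.zeros(4)          # state: [px, py, vx, vy]
P = np.eye(4)

def kalman_step(x, P, z):
    x = F @ x                                        # predict
    P = F @ P @ F.T + Q
    S = H @ P @ H.T + R                              # update
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P

for z in [np.array([1.0, 0.9]), np.array([2.1, 2.0]), np.array([2.9, 3.1])]:
    x, P = kalman_step(x, P, z)
print("estimated position:", x[:2], "velocity:", x[2:])
```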
Abstract:
Real-time embedded applications must process large amounts of data within small time windows. Adaptively parallelizing and distributing workloads is a suitable solution for computationally demanding applications. The purpose of the Parallel Real-Time Framework for distributed adaptive embedded systems is to guarantee local and distributed processing of real-time applications. This work identifies some promising research directions for parallel/distributed real-time embedded applications.
Abstract:
Consider the problem of assigning real-time tasks on a heterogeneous multiprocessor platform comprising two different types of processors; such a platform is referred to as a two-type platform. We present two linearithmic time-complexity algorithms, SA and SA-P, each providing the following guarantee. For a given two-type platform and a given task set, if there exists a feasible task-to-processor-type assignment such that tasks can be scheduled to meet deadlines by allowing them to migrate only between processors of the same type, then (i) SA is guaranteed to find such a feasible task-to-processor-type assignment, with the same restriction on task migration, given a platform in which processors are 1+α/2 times faster, and (ii) SA-P succeeds in finding a feasible task-to-processor assignment where tasks are not allowed to migrate between processors, given a platform in which processors are 1+α times faster, where 0 < α ≤ 1. The parameter α is a property of the task set; it is the maximum utilization of any task, which is less than or equal to 1.
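A quick worked example of the stated bounds, with made-up utilizations: since α is the largest task utilization (at most 1), SA needs processors 1 + α/2 times faster and SA-P needs processors 1 + α times faster.

```python
# Worked example of the speedup bounds; the task utilisations are made up.
utilisations = [0.9, 0.4, 0.25, 0.6]
alpha = max(u for u in utilisations if u <= 1)   # here alpha = 0.9
print("SA speedup factor:  ", 1 + alpha / 2)     # 1.45
print("SA-P speedup factor:", 1 + alpha)         # 1.9
```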
Abstract:
Smartphones and other internet-enabled devices are now common in our everyday life, so it is unsurprising that a current trend is to adapt desktop PC applications to execute on them. However, since most of these applications have quality of service (QoS) requirements, their execution on resource-constrained mobile devices presents several challenges. One solution to support more stringent applications is to offload some of the applications' services to nearby surrogate devices. Therefore, in this paper, we propose an adaptable offloading mechanism which takes into account the QoS requirements of the application being executed (particularly its real-time requirements), while allowing services to be offloaded to several surrogate nodes. We also present how the proposed computing model can be implemented in an Android environment.
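A hedged sketch of a classic offloading test follows (not the paper's exact model): offload a service when the estimated remote time, including data transfer, still meets the deadline and beats local execution. All numbers and parameter names are illustrative.

```python
# Classic offloading decision sketch (not the paper's exact model):
# compare estimated local execution time against transfer plus remote
# execution time, subject to the task's deadline.
def should_offload(cycles, data_bits, local_hz, remote_hz, bandwidth_bps,
                   deadline_s):
    t_local = cycles / local_hz
    t_remote = data_bits / bandwidth_bps + cycles / remote_hz
    return t_remote < t_local and t_remote <= deadline_s

# e.g. 2 Gcycles of work, 1 MB of state, 1 GHz phone vs 4 GHz surrogate
print(should_offload(2e9, 8e6, 1e9, 4e9, 50e6, deadline_s=1.5))   # True
```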
Abstract:
In this paper we discuss the challenges and design principles of an implementation of slot-based task-splitting algorithms in the Linux 2.6.34 kernel. We show that this kernel version provides the features required for implementing such scheduling algorithms. We show that the real behavior of the scheduling algorithm is very close to the theoretical one. We run and discuss experiments on 4-core and 24-core machines.