976 results for adaptive architecture
Abstract:
This paper presents a first approach to an Evaluation Engine Architecture (EEA), proposed to support adaptive integral assessment in the context of a virtual learning environment. The goal of our research is to design an evaluation engine tool that assists in the whole assessment process within the A2UN@ project, linking that tool with the other key elements of a learning design (learning tasks, learning resources and learning support). Teachers would define the relations between knowledge, competencies, activities, resources and types of assessment. By providing these relations, it is possible to obtain more accurate estimations of the student's knowledge for adaptive evaluations and future recommendations. The process is supported by the usage of educational standards and specifications and by an integral user model.
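The relation the teachers declare can be pictured as a small data model. Below is a minimal sketch in Python, with hypothetical names (not the A2UN@ schema), of competencies linked to activities and a naive weighted estimate of a student's knowledge per competency.

# Hypothetical illustration of the teacher-declared relation between
# activities, competencies and assessment types, plus a naive estimator.
from dataclasses import dataclass

@dataclass
class Activity:
    name: str
    competencies: list          # competencies this activity gives evidence for
    assessment_type: str        # e.g. "quiz", "essay", "peer-review"
    weight: float = 1.0

def estimate_knowledge(activities, scores):
    """Weighted average of activity scores per competency (0..1)."""
    totals, weights = {}, {}
    for act in activities:
        score = scores.get(act.name)
        if score is None:
            continue                      # activity not attempted yet
        for comp in act.competencies:
            totals[comp] = totals.get(comp, 0.0) + act.weight * score
            weights[comp] = weights.get(comp, 0.0) + act.weight
    return {c: totals[c] / weights[c] for c in totals}

acts = [Activity("quiz-1", ["modelling"], "quiz"),
        Activity("project", ["modelling", "design"], "essay", weight=2.0)]
print(estimate_knowledge(acts, {"quiz-1": 0.8, "project": 0.6}))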
Abstract:
Hypermedia systems based on the Web for open distance education are becoming increasingly popular as tools for user-driven access to learning information. Adaptive hypermedia is a new research direction within the area of user-adaptive systems, aiming to increase their functionality by making them personalized [Eklu 96]. This paper sketches a general agent architecture to include navigational adaptability and user-friendly processes which would guide and accompany the student during his/her learning on the PLAN-G hypermedia system (New Generation Telematics Platform to Support Open and Distance Learning), with the aid of computer networks and specifically WWW technology [Marz 98-1] [Marz 98-2]. The actual PLAN-G prototype is successfully used with some informatics courses (the current version has no agents yet). The proposed multi-agent system contains two different types of adaptive autonomous software agents: Personal Digital Agents (Interface), to interact directly with the student when necessary; and Information Agents (Intermediaries), to filter and discover information to learn and to adapt the navigation space to a specific student.
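To make the two agent roles concrete, here is a minimal sketch with hypothetical class and field names: an Information Agent that filters the navigation space against a student profile, and a Personal Digital Agent that presents the result. The real PLAN-G agents are of course richer than this.

# Hypothetical sketch of the two agent roles described in the abstract.
class InformationAgent:
    """Intermediary: filters candidate pages against a student profile."""
    def filter_pages(self, pages, profile):
        return [p for p in pages if p["level"] <= profile["level"]]

class PersonalDigitalAgent:
    """Interface: presents the adapted navigation space to the student."""
    def __init__(self, intermediary):
        self.intermediary = intermediary
    def suggest(self, pages, profile):
        adapted = self.intermediary.filter_pages(pages, profile)
        return [p["title"] for p in adapted]

agent = PersonalDigitalAgent(InformationAgent())
pages = [{"title": "Intro", "level": 1}, {"title": "Advanced", "level": 3}]
print(agent.suggest(pages, {"level": 2}))   # -> ['Intro']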
Abstract:
At its most fundamental, cognition as displayed by biological agents (such as humans) may be said to consist of the manipulation and utilisation of memory. Recent discussions in the field of cognitive robotics have emphasised the role of embodiment and the necessity of a value or motivation for autonomous behaviour. This work proposes a computational architecture – the Memory-Based Cognitive (MBC) architecture – based upon these considerations for the autonomous development of control of a simple mobile robot. This novel architecture will permit the exploration of theoretical issues in cognitive robotics and animal cognition. Furthermore, the biological inspiration of the architecture is anticipated to result in a mobile robot controller which displays adaptive behaviour in unknown environments.
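As a purely speculative illustration (not the paper's MBC design), a memory-based controller can be reduced to storing (sensation, action, value) triples and choosing the action of the most similar, most valuable memory:

# Speculative sketch of a memory-based controller: remembered situations
# drive action selection, with a value term standing in for motivation.
import math, random

memory = []   # list of (sensation, action, value) triples

def act(sensation, actions=("left", "right", "forward")):
    if not memory:
        return random.choice(actions)       # nothing remembered yet: explore
    def score(entry):
        s, _, value = entry
        dist = math.dist(s, sensation)      # similarity of situations
        return value - dist                 # prefer close, valuable memories
    return max(memory, key=score)[1]

memory.append(((0.9, 0.1), "left", 1.0))    # past: obstacle right, went left
print(act((0.8, 0.2)))                      # -> 'left'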
Abstract:
Self-organizing neural networks have been implemented in a wide range of application areas such as speech processing, image processing, optimization and robotics. Recent variations to the basic model proposed by the authors enable it to order state space using a subset of the input vector and to apply a local adaptation procedure that does not rely on a predefined test duration limit. Both these variations have been incorporated into a new feature map architecture that forms an integral part of a Hybrid Learning System (HLS) based on a genetic-based classifier system. Problems are represented within HLS as objects characterized by environmental features. Objects controlled by the system have preset targets set against a subset of their features. The system's objective is to achieve these targets by evolving a behavioural repertoire that efficiently explores and exploits the problem environment. Feature maps encode two types of knowledge within HLS: long-term memory traces of useful regularities within the environment, and classifier performance data calibrated against an object's feature states and targets. Self-organization of these networks constitutes non-genetic (experience-driven) learning within HLS. This paper presents a description of the HLS architecture and an analysis of the modified feature map implementing associative memory. Initial results are presented that demonstrate the behaviour of the system on a simple control task.
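For reference, the core update of a self-organizing feature map is the standard Kohonen step: find the best-matching unit and pull its neighbourhood towards the input. The sketch below shows only this baseline; it does not reproduce the HLS variations (input-subset ordering, local adaptation schedule) described above.

# Standard Kohonen update on a 1-D map, shown as a baseline illustration.
import random

def som_step(weights, x, lr=0.5, radius=1):
    """weights: list of weight vectors on a 1-D map; x: input vector."""
    bmu = min(range(len(weights)),
              key=lambda i: sum((w - xi) ** 2 for w, xi in zip(weights[i], x)))
    for i, w in enumerate(weights):
        if abs(i - bmu) <= radius:          # neighbourhood of the winner
            weights[i] = [wj + lr * (xj - wj) for wj, xj in zip(w, x)]
    return bmu

nodes = [[random.random(), random.random()] for _ in range(5)]
print(som_step(nodes, [0.2, 0.7]))          # index of the best-matching unit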
Abstract:
We discuss the development and performance of a low-power sensor node (hardware, software and algorithms) that autonomously controls the sampling interval of a suite of sensors based on local state estimates and future predictions of water flow. The problem is motivated by the need to accurately reconstruct abrupt state changes in urban watersheds and stormwater systems. Presently, the detection of these events is limited by the temporal resolution of sensor data. It is often infeasible, however, to increase measurement frequency due to energy and sampling constraints. This is particularly true for real-time water quality measurements, where sampling frequency is limited by reagent availability, sensor power consumption, and, in the case of automated samplers, the number of available sample containers. These constraints pose a significant barrier to the ubiquitous and cost-effective instrumentation of large hydraulic and hydrologic systems. Each of our sensor nodes is equipped with a low-power microcontroller and a wireless module to take advantage of urban cellular coverage. The node persistently updates a local, embedded model of flow conditions, while IP-connectivity permits each node to continually query public weather servers for hourly precipitation forecasts. The sampling frequency is then adjusted to increase the likelihood of capturing abrupt changes in a sensor signal, such as the rise in the hydrograph, an event that is often difficult to capture through traditional sampling techniques. Our architecture forms an embedded processing chain, leveraging local computational resources to assess uncertainty by analyzing data as it is collected. A network is presently being deployed in an urban watershed in Michigan, and initial results indicate that the system accurately reconstructs signals of interest while significantly reducing energy consumption and the use of sampling resources. We also expand our analysis by discussing the role of this approach in the efficient real-time measurement of stormwater systems.
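The control logic amounts to choosing the next sampling interval from the local state estimate and the precipitation forecast. The sketch below is a simplified stand-in with invented thresholds, not the deployed node's firmware:

# Simplified adaptive-sampling rule: sample fast when a storm is forecast
# or the signal is already changing, otherwise conserve energy.
def next_interval(last_values, rain_forecast_mm,
                  base=3600, fast=300, slope_threshold=0.05):
    """Return seconds until the next sample."""
    if rain_forecast_mm > 1.0:              # storm expected: sample fast
        return fast
    if len(last_values) >= 2:
        slope = abs(last_values[-1] - last_values[-2])
        if slope > slope_threshold:         # abrupt change under way
            return fast
    return base                             # quiescent: long interval

print(next_interval([0.41, 0.42], rain_forecast_mm=0.0))   # -> 3600
print(next_interval([0.41, 0.60], rain_forecast_mm=0.0))   # -> 300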
Abstract:
In this paper the architecture of an experimental multiparadigmatic programming environment is sketched, showing how its parts combine with application modules in order to integrate program modules written in different programming languages and paradigms. Adaptive automata are special self-modifying formal state machines used as a design and implementation tool in the representation of complex systems. Adaptive automata have been proven to have the same formal power as Turing Machines; therefore, at least in theory, arbitrarily complex systems may be modeled with adaptive automata. The present work briefly introduces this formal tool and presents case studies showing how to use it in two very different situations: first, in the name management module of a multiparadigmatic, multi-language programming environment, and second, in an application program implementing an adaptive automaton that accepts a context-sensitive language.
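The defining feature of an adaptive automaton is that a transition can carry an adaptive action that rewrites the transition table while the input is being read. A toy illustration follows; a real context-sensitive acceptor such as the paper's case study needs considerably more machinery.

# Toy adaptive automaton: reading 'a' in q1 adaptively creates the 'b' rule.
transitions = {("q0", "a"): ("q1", None)}

def add_rule(state_from, symbol, state_to):
    transitions[(state_from, symbol)] = (state_to, None)

transitions[("q1", "a")] = ("q1", lambda: add_rule("q1", "b", "q2"))

def run(word, state="q0"):
    for ch in word:
        if (state, ch) not in transitions:
            return False
        state, action = transitions[(state, ch)]
        if action:
            action()                  # the automaton rewrites itself
    return state == "q2"

print(run("aab"))   # True: the 'b' transition was created while reading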
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
This work presents a methodology to analyze electric power system transient stability for the first swing using a neural network based on the adaptive resonance theory (ART) architecture, called the Euclidean ARTMAP neural network. ART architectures present plasticity and stability characteristics, which are very important for training and for executing the analysis quickly. The Euclidean ARTMAP version provides more accurate and faster solutions when compared to the fuzzy ARTMAP configuration. Three steps are necessary for the network to work: training, analysis and continuous training. The training step requires considerable effort (processing), while the analysis is carried out almost without computational effort. The proposed network allows approaching several topologies of the electric system at the same time; therefore it is an alternative for real-time transient stability analysis of electric power systems. To illustrate the proposed neural network, an application is presented for a multi-machine electric power system composed of 10 synchronous machines, 45 buses and 73 transmission lines.
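The behaviour sketched below is the generic ART dynamic that Euclidean ARTMAP variants build on: nearest-prototype choice under Euclidean distance, a vigilance test, and either refinement of an existing category (plasticity) or creation of a new one (stability). Parameters are illustrative, not the paper's.

# Generic ART-style category step with Euclidean distance (illustrative).
import math

def art_step(prototypes, x, vigilance=0.5, lr=0.3):
    if prototypes:
        j = min(range(len(prototypes)), key=lambda i: math.dist(prototypes[i], x))
        if math.dist(prototypes[j], x) <= vigilance:
            prototypes[j] = [p + lr * (xi - p) for p, xi in zip(prototypes[j], x)]
            return j                  # resonance: refine existing category
    prototypes.append(list(x))        # mismatch: commit a new category
    return len(prototypes) - 1

cats = []
for sample in ([0.1, 0.2], [0.15, 0.25], [0.9, 0.8]):
    print(art_step(cats, sample))     # -> 0, 0, 1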
Abstract:
This work presents a procedure for electric load forecasting based on adaptive multilayer feedforward neural networks trained by the Backpropagation algorithm. The neural network architecture is formulated with two extra parameters, the scaling and translation of the postsynaptic functions at each node, which are adjusted iteratively by the gradient-descent method. Besides, the neural network also uses an adaptive process based on fuzzy logic to adjust the network training rate. This methodology provides an efficient modification of the neural network that results in faster convergence and more precise results, in comparison to the conventional formulation of the Backpropagation algorithm. The adaptation of the training rate is carried out using information on the global error and the global error variation. After training, the neural network is capable of forecasting the electric load 24 hours ahead. To illustrate the proposed methodology, data from a Brazilian electric company are used.
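The training-rate adaptation can be illustrated with a crisp stand-in for the paper's fuzzy rules: increase the rate while the global error decreases, cut it when the error variation turns positive. The constants below are made up.

# Crisp stand-in for fuzzy training-rate adaptation from (error, variation).
def adapt_rate(rate, error, prev_error, up=1.05, down=0.7, lo=1e-4, hi=1.0):
    delta = error - prev_error           # global error variation
    rate = rate * (up if delta < 0 else down)
    return min(hi, max(lo, rate))        # keep the rate in a sane range

rate, prev = 0.1, 1.0
for err in (0.8, 0.6, 0.7, 0.5):         # an invented error trajectory
    rate = adapt_rate(rate, err, prev)
    prev = err
    print(round(rate, 4))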
Abstract:
Current SoC design trends are characterized by the integration of a larger amount of IPs targeting a wide range of application fields. Such multi-application systems are constrained by a set of requirements. In such a scenario, networks-on-chip (NoC) are becoming more important as the on-chip communication structure. Designing an optimal NoC that satisfies the requirements of each individual application requires the specification of a large set of configuration parameters, leading to a wide solution space. It has been shown that IP mapping is one of the most critical parameters in NoC design, strongly influencing SoC performance. IP mapping has been solved for single-application systems using single- and multi-objective optimization algorithms. In this paper we propose the use of a multi-objective adaptive immune algorithm (M²AIA), an evolutionary approach, to solve the multi-application NoC mapping problem. Latency and power consumption were adopted as the target multi-objective functions. To assess the efficiency of our approach, our results are compared with those of the genetic and branch-and-bound multi-objective mapping algorithms. We tested 11 well-known benchmarks, including random and real applications, combining up to 8 applications in the same SoC. The experimental results showed that M²AIA decreases power consumption and latency on average by 27.3 and 42.1 %, respectively, compared to the branch-and-bound approach, and by 29.3 and 36.1 % compared to the genetic approach.
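The multi-objective core common to all three compared algorithms is a Pareto-dominance test over (latency, power) pairs, sketched below with invented cost values; in practice the costs come from NoC simulation or analytic models.

# Pareto-dominance filter over (latency, power) mapping costs.
def dominates(a, b):
    """True if cost a is no worse in all objectives and better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(costs):
    return [c for c in costs
            if not any(dominates(other, c) for other in costs if other != c)]

candidates = [(12.0, 3.1), (10.5, 3.4), (11.0, 2.9), (13.0, 3.0)]  # (latency, power)
print(pareto_front(candidates))   # -> [(10.5, 3.4), (11.0, 2.9)]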
Abstract:
Immune responses have the important function of host defense and protection against pathogens. However, the immune response also causes inflammation and host tissue injury, termed immunopathology. For example, hepatitis B and C virus infections in humans cause immunopathological sequelae, with destruction of liver cells by the host's own immune response. Similarly, after infection with lymphocytic choriomeningitis virus (LCMV) in mice, the adaptive immune response causes liver cell damage, choriomeningitis and destruction of lymphoid organ architecture. The immunopathological sequelae during LCMV infection have been attributed to cytotoxic CD8(+) T cells. However, we now show that during LCMV infection CD4(+) T cells selectively induced the destruction of the splenic marginal zone and caused liver cell damage with elevated serum alanine aminotransferase (ALT) levels. The destruction of the splenic marginal zone by CD4(+) T cells included the reduction of marginal zone B cells, marginal zone macrophages and marginal zone metallophilic macrophages. Functionally, this resulted in an impaired production of neutralizing antibodies against LCMV. Furthermore, CD4(+) T cells reduced B cells with an IgM(high)IgD(low) phenotype (transitional stage 1 and 2, marginal zone B cells), whereas other B cell subtypes such as follicular type 1 and 2 and germinal center/memory B cells were not affected. Adoptive transfer of CD4(+) T cells lacking different important effector cytokines and cytolytic pathways such as IFNγ, TNFα, perforin and Fas-FasL interaction revealed that these cytolytic pathways are redundant in the induction of immunopathological sequelae in the spleen. In conclusion, our results define an important role of CD4(+) T cells in the induction of immunopathology in the liver and spleen. This includes the CD4(+) T cell-mediated destruction of the splenic marginal zone with consequently impaired protective neutralizing antibody responses.
Abstract:
The increasing complexity of current software systems is encouraging the development of self-managed software architectures, i.e. systems capable of reconfiguring their structure at runtime to fulfil a set of goals. Several approaches have covered different aspects of their development, but some issues remain open, such as the maintainability or the scalability of self-management subsystems. Centralized approaches, like self-adaptive architectures, offer good maintenance properties but do not scale well for large systems. On the contrary, decentralized approaches, like self-organising architectures, offer good scalability but are not maintainable: reconfiguration specifications are spread out and often tangled with functional specifications. In order to address these issues, this paper presents an aspect-oriented autonomic reconfiguration approach where: (1) each subsystem is provided with self-management properties so it can evolve itself and the components it is composed of; (2) self-management concerns are isolated and encapsulated into aspects, thus improving their reuse and maintenance. Slovenian abstract: An approach to self-reconfiguration of software architecture is presented.
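The encapsulation argued for in point (2) can be pictured with a Python decorator standing in for an aspect: the reconfiguration policy is woven around a component without touching its functional code. The names and the failure-counting policy below are illustrative only.

# Decorator as a stand-in for an aspect isolating self-management concerns.
def self_management(max_failures=3):
    """Aspect: watches failures and triggers a reconfiguration hook."""
    def weave(component_cls):
        original = component_cls.handle
        def monitored(self, msg):
            try:
                return original(self, msg)
            except Exception:
                self.failures = getattr(self, "failures", 0) + 1
                if self.failures >= max_failures:
                    self.reconfigure()      # self-management concern
                raise
        component_cls.handle = monitored
        return component_cls
    return weave

@self_management(max_failures=2)
class Worker:
    def handle(self, msg):                  # purely functional concern
        return msg.upper()
    def reconfigure(self):
        print("swapping implementation at runtime")

print(Worker().handle("ok"))                # functional path unchanged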
Abstract:
INTRODUCTION: Objective assessment of motor skills has become an important challenge in minimally invasive surgery (MIS) training. Currently, there is no gold standard defining and determining residents' surgical competence. To aid in the decision process, we analyze the validity of a supervised classifier to determine the degree of MIS competence based on assessment of psychomotor skills. METHODOLOGY: An ANFIS is trained to classify performance in a box trainer peg transfer task performed by two groups (expert/non-expert). There were 42 participants included in the study: the non-expert group consisted of 16 medical students and 8 residents (< 10 MIS procedures performed), whereas the expert group consisted of 14 residents (> 10 MIS procedures performed) and 4 experienced surgeons. Instrument movements were captured by means of the Endoscopic Video Analysis (EVA) tracking system. Nine motion analysis parameters (MAPs) were analyzed, including time, path length, depth, average speed, average acceleration, economy of area, economy of volume, idle time and motion smoothness. Data reduction was performed by means of principal component analysis and the result was used to train the ANFIS net. Performance was measured by leave-one-out cross-validation. RESULTS: The ANFIS presented an accuracy of 80.95%, with 13 experts and 21 non-experts correctly classified. The total root mean square error was 0.88, while the area under the classifier's ROC curve (AUC) was measured at 0.81. DISCUSSION: We have shown the usefulness of ANFIS for classification of MIS competence in a simple box trainer exercise. The main advantage of using ANFIS resides in its continuous output, which allows fine discrimination of surgical competence. There are, however, challenges that must be taken into account when considering the use of ANFIS (e.g. training time, architecture modeling). Despite this, we have shown the discriminative power of ANFIS for a low-difficulty box trainer task, regardless of the individual significance of each MAP. Future studies are required to confirm the findings, including new tasks, conditions and sample populations.
Abstract:
The presented work aims to contribute towards the standardization and the interoperability of the Future Internet through an open and scalable architecture design. We present S³OiA, a syntactic/semantic Service-Oriented Architecture that allows the integration of any type of object or device, whatever its nature, on the Internet of Things. Moreover, the architecture makes possible the use of underlying heterogeneous resources as a substrate for the automatic composition of complex applications through a semantic Triple Space paradigm. The created applications are dynamic and adaptive, since they are able to evolve depending on the context where they are executed. The validation scenario of this architecture encompasses areas which are prone to involve human beings in order to promote personal autonomy, such as home-care automation environments and Ambient Assisted Living.
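The Triple Space paradigm can be pictured as a shared space of (subject, predicate, object) triples that services query by pattern in order to compose applications. A toy sketch with an invented vocabulary:

# Toy triple space: publish triples, compose by pattern matching.
space = [
    ("lamp-1", "locatedIn", "livingRoom"),
    ("lamp-1", "offers", "switchOn"),
    ("sensor-2", "detects", "presence"),
]

def match(pattern):
    """None in a pattern position acts as a wildcard."""
    return [t for t in space
            if all(p is None or p == v for p, v in zip(pattern, t))]

# Compose: find a device in the living room that offers an action.
devices = {s for s, _, _ in match((None, "locatedIn", "livingRoom"))}
actions = [t for t in match((None, "offers", None)) if t[0] in devices]
print(actions)   # -> [('lamp-1', 'offers', 'switchOn')]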
Abstract:
Evolvable Hardware (EH) is a technique that consists of using reconfigurable hardware devices whose configuration is controlled by an Evolutionary Algorithm (EA). Our system consists of a fully FPGA-implemented scalable EH platform, where the Reconfigurable processing Core (RC) can adaptively increase or decrease in size. Figure 1 shows the architecture of the proposed System-on-Programmable-Chip (SoPC), consisting of a MicroBlaze processor responsible for controlling the whole system operation, a Reconfiguration Engine (RE), and a Reconfigurable processing Core which is able to change its size in both height and width. This system is used to implement image filters, which are generated autonomously thanks to the evolutionary process. The system is complemented with a camera that enables the usage of the platform for real-time applications.